{ "version":"2.0", "metadata":{ "apiVersion":"2018-06-27", "endpointPrefix":"textract", "jsonVersion":"1.1", "protocol":"json", "serviceFullName":"Amazon Textract", "serviceId":"Textract", "signatureVersion":"v4", "targetPrefix":"Textract", "uid":"textract-2018-06-27" }, "operations":{ "AnalyzeDocument":{ "name":"AnalyzeDocument", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"AnalyzeDocumentRequest"}, "output":{"shape":"AnalyzeDocumentResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"}, {"shape":"HumanLoopQuotaExceededException"} ], "documentation":"

Analyzes an input document for relationships between detected items.

The types of information returned are as follows:

Selection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables. A SELECTION_ELEMENT Block object contains information about a selection element, including the selection status.

You can choose which type of analysis to perform by specifying the FeatureTypes list.

The output is returned in a list of Block objects.

AnalyzeDocument is a synchronous operation. To analyze documents asynchronously, use StartDocumentAnalysis.

For more information, see Document Text Analysis.
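As a minimal sketch of the synchronous call described above (assuming the boto3 SDK; the bucket and document names are hypothetical placeholders):

```python
# Sketch: building an AnalyzeDocument request for a document stored in S3.
# Bucket and document names are hypothetical placeholders.
def build_analyze_document_request(bucket, name, feature_types):
    # FeatureTypes selects which analyses run (e.g. TABLES, FORMS, SIGNATURES).
    return {
        'Document': {'S3Object': {'Bucket': bucket, 'Name': name}},
        'FeatureTypes': list(feature_types),
    }

params = build_analyze_document_request('my-bucket', 'report.pdf', ['TABLES', 'FORMS'])
# With boto3 the request would be sent as:
#   boto3.client('textract').analyze_document(**params)
print(sorted(params))  # -> ['Document', 'FeatureTypes']
```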

" }, "AnalyzeExpense":{ "name":"AnalyzeExpense", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"AnalyzeExpenseRequest"}, "output":{"shape":"AnalyzeExpenseResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"} ], "documentation":"

AnalyzeExpense synchronously analyzes an input document for financially related relationships between text.

Information is returned as ExpenseDocuments and separated as follows:


" }, "AnalyzeID":{ "name":"AnalyzeID", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"AnalyzeIDRequest"}, "output":{"shape":"AnalyzeIDResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"} ], "documentation":"

Analyzes identity documents for relevant information. This information is extracted and returned as IdentityDocumentFields, which records both the normalized field and value of the extracted text. Unlike other Amazon Textract operations, AnalyzeID doesn't return any Geometry data.
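As a sketch of consuming IdentityDocumentFields, each of which carries a normalized field type and the detected value (the response fragment below is hypothetical example data shaped like an AnalyzeID result):

```python
# Sketch: reading normalized field/value pairs from an AnalyzeID response.
# The response fragment below is hypothetical example data.
def fields_as_dict(identity_document):
    out = {}
    for f in identity_document.get('IdentityDocumentFields', []):
        # Type.Text is the normalized field name; ValueDetection.Text the value.
        out[f['Type']['Text']] = f['ValueDetection']['Text']
    return out

doc = {'IdentityDocumentFields': [
    {'Type': {'Text': 'FIRST_NAME'}, 'ValueDetection': {'Text': 'JANE'}},
    {'Type': {'Text': 'LAST_NAME'}, 'ValueDetection': {'Text': 'DOE'}},
]}
print(fields_as_dict(doc))  # -> {'FIRST_NAME': 'JANE', 'LAST_NAME': 'DOE'}
```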

" }, "DetectDocumentText":{ "name":"DetectDocumentText", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"DetectDocumentTextRequest"}, "output":{"shape":"DetectDocumentTextResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"} ], "documentation":"

Detects text in the input document. Amazon Textract can detect lines of text and the words that make up a line of text. The input document must be in one of the following formats: JPEG, PNG, PDF, or TIFF. DetectDocumentText returns the detected text in an array of Block objects.

Each document page has an associated Block of type PAGE. Each PAGE Block object is the parent of LINE Block objects that represent the lines of detected text on a page. A LINE Block object is a parent for each word that makes up the line. Words are represented by Block objects of type WORD.

DetectDocumentText is a synchronous operation. To analyze documents asynchronously, use StartDocumentTextDetection.

For more information, see Document Text Detection.
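The PAGE/LINE/WORD hierarchy above can be walked as in this sketch; the sample blocks are hypothetical response data:

```python
# Sketch: reconstructing lines of text from the LINE -> WORD Block hierarchy.
def lines_from_blocks(blocks):
    by_id = {b['Id']: b for b in blocks}
    lines = []
    for block in blocks:
        if block['BlockType'] != 'LINE':
            continue
        words = []
        # CHILD relationships on a LINE point at its WORD blocks.
        for rel in block.get('Relationships', []):
            if rel['Type'] == 'CHILD':
                words.extend(by_id[i]['Text'] for i in rel['Ids'])
        lines.append(' '.join(words))
    return lines

sample = [
    {'Id': 'l1', 'BlockType': 'LINE',
     'Relationships': [{'Type': 'CHILD', 'Ids': ['w1', 'w2']}]},
    {'Id': 'w1', 'BlockType': 'WORD', 'Text': 'Hello'},
    {'Id': 'w2', 'BlockType': 'WORD', 'Text': 'world'},
]
print(lines_from_blocks(sample))  # -> ['Hello world']
```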

" }, "GetDocumentAnalysis":{ "name":"GetDocumentAnalysis", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"GetDocumentAnalysisRequest"}, "output":{"shape":"GetDocumentAnalysisResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InvalidJobIdException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"} ], "documentation":"

Gets the results for an Amazon Textract asynchronous operation that analyzes text in a document.

You start asynchronous text analysis by calling StartDocumentAnalysis, which returns a job identifier (JobId). When the text analysis operation finishes, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to StartDocumentAnalysis. To get the results of the text analysis operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetDocumentAnalysis, and pass the job identifier (JobId) from the initial call to StartDocumentAnalysis.

GetDocumentAnalysis returns an array of Block objects. The following types of information are returned:

While processing a document with queries, check for INVALID_REQUEST_PARAMETERS output. This indicates that either the per-page query limit has been exceeded or that the operation is trying to query a page in the document that doesn't exist.

Selection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables. A SELECTION_ELEMENT Block object contains information about a selection element, including the selection status.

Use the MaxResults parameter to limit the number of blocks that are returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetDocumentAnalysis, and populate the NextToken request parameter with the token value that's returned from the previous call to GetDocumentAnalysis.

For more information, see Document Text Analysis.
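The MaxResults/NextToken pagination pattern described above can be sketched as follows, using a stubbed page fetcher in place of a real call (with boto3 the fetcher would wrap textract.get_document_analysis(JobId=..., NextToken=...)):

```python
# Sketch: accumulating all Block objects across paginated responses.
def collect_blocks(fetch_page):
    blocks, token = [], None
    while True:
        resp = fetch_page(token)
        blocks.extend(resp.get('Blocks', []))
        token = resp.get('NextToken')
        if not token:  # no NextToken means the last page was reached
            return blocks

# Hypothetical two-page result set for demonstration.
pages = {
    None: {'Blocks': [{'Id': 'a'}], 'NextToken': 't1'},
    't1': {'Blocks': [{'Id': 'b'}]},
}
all_blocks = collect_blocks(lambda tok: pages[tok])
print(len(all_blocks))  # -> 2
```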

" }, "GetDocumentTextDetection":{ "name":"GetDocumentTextDetection", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"GetDocumentTextDetectionRequest"}, "output":{"shape":"GetDocumentTextDetectionResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InvalidJobIdException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"} ], "documentation":"

Gets the results for an Amazon Textract asynchronous operation that detects text in a document. Amazon Textract can detect lines of text and the words that make up a line of text.

You start asynchronous text detection by calling StartDocumentTextDetection, which returns a job identifier (JobId). When the text detection operation finishes, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to StartDocumentTextDetection. To get the results of the text-detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetDocumentTextDetection, and pass the job identifier (JobId) from the initial call to StartDocumentTextDetection.

GetDocumentTextDetection returns an array of Block objects.

Each document page has an associated Block of type PAGE. Each PAGE Block object is the parent of LINE Block objects that represent the lines of detected text on a page. A LINE Block object is a parent for each word that makes up the line. Words are represented by Block objects of type WORD.

Use the MaxResults parameter to limit the number of blocks that are returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetDocumentTextDetection, and populate the NextToken request parameter with the token value that's returned from the previous call to GetDocumentTextDetection.

For more information, see Document Text Detection.

" }, "GetExpenseAnalysis":{ "name":"GetExpenseAnalysis", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"GetExpenseAnalysisRequest"}, "output":{"shape":"GetExpenseAnalysisResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InvalidJobIdException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"} ], "documentation":"

Gets the results for an Amazon Textract asynchronous operation that analyzes invoices and receipts. Amazon Textract finds contact information, items purchased, and vendor name from input invoices and receipts.

You start asynchronous invoice/receipt analysis by calling StartExpenseAnalysis, which returns a job identifier (JobId). Upon completion of the invoice/receipt analysis, Amazon Textract publishes the completion status to the Amazon Simple Notification Service (Amazon SNS) topic. This topic must be registered in the initial call to StartExpenseAnalysis. To get the results of the invoice/receipt analysis operation, first ensure that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetExpenseAnalysis, and pass the job identifier (JobId) from the initial call to StartExpenseAnalysis.

Use the MaxResults parameter to limit the number of blocks that are returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetExpenseAnalysis, and populate the NextToken request parameter with the token value that's returned from the previous call to GetExpenseAnalysis.

For more information, see Analyzing Invoices and Receipts.

" }, "GetLendingAnalysis":{ "name":"GetLendingAnalysis", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"GetLendingAnalysisRequest"}, "output":{"shape":"GetLendingAnalysisResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InvalidJobIdException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"} ], "documentation":"

Gets the results for an Amazon Textract asynchronous operation that analyzes text in a lending document.

You start asynchronous text analysis by calling StartLendingAnalysis, which returns a job identifier (JobId). When the text analysis operation finishes, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to StartLendingAnalysis.

To get the results of the text analysis operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetLendingAnalysis, and pass the job identifier (JobId) from the initial call to StartLendingAnalysis.

" }, "GetLendingAnalysisSummary":{ "name":"GetLendingAnalysisSummary", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"GetLendingAnalysisSummaryRequest"}, "output":{"shape":"GetLendingAnalysisSummaryResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InvalidJobIdException"}, {"shape":"InternalServerError"}, {"shape":"ThrottlingException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"} ], "documentation":"

Gets summarized results for the StartLendingAnalysis operation, which analyzes text in a lending document. The returned summary consists of information about documents grouped together by a common document type. Information like detected signatures, page numbers, and split documents is returned with respect to the type of grouped document.

You start asynchronous text analysis by calling StartLendingAnalysis, which returns a job identifier (JobId). When the text analysis operation finishes, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to StartLendingAnalysis.

To get the results of the text analysis operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetLendingAnalysisSummary, and pass the job identifier (JobId) from the initial call to StartLendingAnalysis.

" }, "StartDocumentAnalysis":{ "name":"StartDocumentAnalysis", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"StartDocumentAnalysisRequest"}, "output":{"shape":"StartDocumentAnalysisResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"IdempotentParameterMismatchException"}, {"shape":"ThrottlingException"}, {"shape":"LimitExceededException"} ], "documentation":"

Starts the asynchronous analysis of an input document for relationships between detected items such as key-value pairs, tables, and selection elements.

StartDocumentAnalysis can analyze text in documents that are in JPEG, PNG, TIFF, and PDF format. The documents are stored in an Amazon S3 bucket. Use DocumentLocation to specify the bucket name and file name of the document.

StartDocumentAnalysis returns a job identifier (JobId) that you use to get the results of the operation. When text analysis is finished, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that you specify in NotificationChannel. To get the results of the text analysis operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetDocumentAnalysis, and pass the job identifier (JobId) from the initial call to StartDocumentAnalysis.

For more information, see Document Text Analysis.
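A sketch of the SUCCEEDED check described above, applied to a hypothetical example of the JSON status message that Textract publishes to the SNS topic:

```python
import json

# Sketch: checking the completion status from the Amazon SNS notification
# before fetching results. The message body is hypothetical example data.
def job_succeeded(sns_message_body):
    return json.loads(sns_message_body).get('Status') == 'SUCCEEDED'

message = json.dumps({'JobId': 'job-123', 'Status': 'SUCCEEDED',
                      'API': 'StartDocumentAnalysis'})
print(job_succeeded(message))  # -> True
```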

" }, "StartDocumentTextDetection":{ "name":"StartDocumentTextDetection", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"StartDocumentTextDetectionRequest"}, "output":{"shape":"StartDocumentTextDetectionResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"IdempotentParameterMismatchException"}, {"shape":"ThrottlingException"}, {"shape":"LimitExceededException"} ], "documentation":"

Starts the asynchronous detection of text in a document. Amazon Textract can detect lines of text and the words that make up a line of text.

StartDocumentTextDetection can analyze text in documents that are in JPEG, PNG, TIFF, and PDF format. The documents are stored in an Amazon S3 bucket. Use DocumentLocation to specify the bucket name and file name of the document.

StartDocumentTextDetection returns a job identifier (JobId) that you use to get the results of the operation. When text detection is finished, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that you specify in NotificationChannel. To get the results of the text detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetDocumentTextDetection, and pass the job identifier (JobId) from the initial call to StartDocumentTextDetection.

For more information, see Document Text Detection.

" }, "StartExpenseAnalysis":{ "name":"StartExpenseAnalysis", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"StartExpenseAnalysisRequest"}, "output":{"shape":"StartExpenseAnalysisResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"IdempotentParameterMismatchException"}, {"shape":"ThrottlingException"}, {"shape":"LimitExceededException"} ], "documentation":"

Starts the asynchronous analysis of invoices or receipts for data like contact information, items purchased, and vendor names.

StartExpenseAnalysis can analyze text in documents that are in JPEG, PNG, and PDF format. The documents must be stored in an Amazon S3 bucket. Use the DocumentLocation parameter to specify the name of your S3 bucket and the name of the document in that bucket.

StartExpenseAnalysis returns a job identifier (JobId) that you will provide to GetExpenseAnalysis to retrieve the results of the operation. When the analysis of the input invoices/receipts is finished, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that you provide to the NotificationChannel. To obtain the results of the invoice and receipt analysis operation, ensure that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetExpenseAnalysis, and pass the job identifier (JobId) that was returned by your call to StartExpenseAnalysis.

For more information, see Analyzing Invoices and Receipts.

" }, "StartLendingAnalysis":{ "name":"StartLendingAnalysis", "http":{ "method":"POST", "requestUri":"/" }, "input":{"shape":"StartLendingAnalysisRequest"}, "output":{"shape":"StartLendingAnalysisResponse"}, "errors":[ {"shape":"InvalidParameterException"}, {"shape":"InvalidS3ObjectException"}, {"shape":"InvalidKMSKeyException"}, {"shape":"UnsupportedDocumentException"}, {"shape":"DocumentTooLargeException"}, {"shape":"BadDocumentException"}, {"shape":"AccessDeniedException"}, {"shape":"ProvisionedThroughputExceededException"}, {"shape":"InternalServerError"}, {"shape":"IdempotentParameterMismatchException"}, {"shape":"ThrottlingException"}, {"shape":"LimitExceededException"} ], "documentation":"

Starts the classification and analysis of an input document. StartLendingAnalysis initiates the classification and analysis of a packet of lending documents. It operates on a document file located in an Amazon S3 bucket.

StartLendingAnalysis can analyze text in documents that are in one of the following formats: JPEG, PNG, TIFF, PDF. Use DocumentLocation to specify the bucket name and the file name of the document.

StartLendingAnalysis returns a job identifier (JobId) that you use to get the results of the operation. When the text analysis is finished, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that you specify in NotificationChannel. To get the results of the text analysis operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If the status is SUCCEEDED you can call either GetLendingAnalysis or GetLendingAnalysisSummary and provide the JobId to obtain the results of the analysis.

If you use OutputConfig to specify an Amazon S3 bucket, the output is contained within the specified prefix, in a directory labeled with the job ID. That directory contains three subdirectories:

" } }, "shapes":{ "AccessDeniedException":{ "type":"structure", "members":{ }, "documentation":"

You aren't authorized to perform the action. Use the Amazon Resource Name (ARN) of an authorized user or IAM role to perform the operation.

", "exception":true }, "AnalyzeDocumentRequest":{ "type":"structure", "required":[ "Document", "FeatureTypes" ], "members":{ "Document":{ "shape":"Document", "documentation":"

The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can't pass image bytes. The document must be an image in JPEG, PNG, PDF, or TIFF format.

If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.

" }, "FeatureTypes":{ "shape":"FeatureTypes", "documentation":"

A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. Add SIGNATURES to return the locations of detected signatures. To perform both forms and table analysis, add TABLES and FORMS to FeatureTypes. To detect signatures within form data and table data, add SIGNATURES to either TABLES or FORMS. All lines and words detected in the document are included in the response (including text that isn't related to the value of FeatureTypes).

" }, "HumanLoopConfig":{ "shape":"HumanLoopConfig", "documentation":"

Sets the configuration for the human in the loop workflow for analyzing documents.

" }, "QueriesConfig":{ "shape":"QueriesConfig", "documentation":"

Contains Queries and the alias for those Queries, as determined by the input.

" } } }, "AnalyzeDocumentResponse":{ "type":"structure", "members":{ "DocumentMetadata":{ "shape":"DocumentMetadata", "documentation":"

Metadata about the analyzed document. An example is the number of pages.

" }, "Blocks":{ "shape":"BlockList", "documentation":"

The items that are detected and analyzed by AnalyzeDocument.

" }, "HumanLoopActivationOutput":{ "shape":"HumanLoopActivationOutput", "documentation":"

Shows the results of the human in the loop evaluation.

" }, "AnalyzeDocumentModelVersion":{ "shape":"String", "documentation":"

The version of the model used to analyze the document.

" } } }, "AnalyzeExpenseRequest":{ "type":"structure", "required":["Document"], "members":{ "Document":{"shape":"Document"} } }, "AnalyzeExpenseResponse":{ "type":"structure", "members":{ "DocumentMetadata":{"shape":"DocumentMetadata"}, "ExpenseDocuments":{ "shape":"ExpenseDocumentList", "documentation":"

The expenses detected by Amazon Textract.

" } } }, "AnalyzeIDDetections":{ "type":"structure", "required":["Text"], "members":{ "Text":{ "shape":"String", "documentation":"

Text of either the normalized field or value associated with it.

" }, "NormalizedValue":{ "shape":"NormalizedValue", "documentation":"

Only returned for dates; contains the type of value detected as well as the date written in a more machine-readable format.

" }, "Confidence":{ "shape":"Percent", "documentation":"

The confidence score of the detected text.

" } }, "documentation":"

Used to contain the information detected by an AnalyzeID operation.

" }, "AnalyzeIDRequest":{ "type":"structure", "required":["DocumentPages"], "members":{ "DocumentPages":{ "shape":"DocumentPages", "documentation":"

The document being passed to AnalyzeID.

" } } }, "AnalyzeIDResponse":{ "type":"structure", "members":{ "IdentityDocuments":{ "shape":"IdentityDocumentList", "documentation":"

The list of documents processed by AnalyzeID. Includes a number denoting their place in the list and the response structure for the document.

" }, "DocumentMetadata":{"shape":"DocumentMetadata"}, "AnalyzeIDModelVersion":{ "shape":"String", "documentation":"

The version of the AnalyzeIdentity API being used to process documents.

" } } }, "BadDocumentException":{ "type":"structure", "members":{ }, "documentation":"

Amazon Textract isn't able to read the document. For more information on the document limits in Amazon Textract, see limits.

", "exception":true }, "Block":{ "type":"structure", "members":{ "BlockType":{ "shape":"BlockType", "documentation":"

The type of text item that's recognized. In operations for text detection, the following types are returned:

In text analysis operations, the following types are returned:

" }, "Confidence":{ "shape":"Percent", "documentation":"

The confidence score that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.

" }, "Text":{ "shape":"String", "documentation":"

The word or line of text that's recognized by Amazon Textract.

" }, "TextType":{ "shape":"TextType", "documentation":"

The kind of text that Amazon Textract has detected. Indicates whether the text is handwritten or printed.

" }, "RowIndex":{ "shape":"UInteger", "documentation":"

The row in which a table cell is located. The first row position is 1. RowIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.

" }, "ColumnIndex":{ "shape":"UInteger", "documentation":"

The column in which a table cell appears. The first column position is 1. ColumnIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.

" }, "RowSpan":{ "shape":"UInteger", "documentation":"

The number of rows that a table cell spans. Currently this value is always 1, even if the number of rows spanned is greater than 1. RowSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.

" }, "ColumnSpan":{ "shape":"UInteger", "documentation":"

The number of columns that a table cell spans. Currently this value is always 1, even if the number of columns spanned is greater than 1. ColumnSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.

" }, "Geometry":{ "shape":"Geometry", "documentation":"

The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.

" }, "Id":{ "shape":"NonEmptyString", "documentation":"

The identifier for the recognized text. The identifier is only unique for a single operation.

" }, "Relationships":{ "shape":"RelationshipList", "documentation":"

A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be the following:

" }, "EntityTypes":{ "shape":"EntityTypes", "documentation":"

The type of entity. The following can be returned:

EntityTypes isn't returned by DetectDocumentText and GetDocumentTextDetection.

" }, "SelectionStatus":{ "shape":"SelectionStatus", "documentation":"

The selection status of a selection element, such as an option button or check box.

" }, "Page":{ "shape":"UInteger", "documentation":"

The page on which a block was detected. Page is returned by synchronous and asynchronous operations. Page values greater than 1 are only returned for multipage documents that are in PDF or TIFF format. A scanned image (JPEG/PNG) provided to an asynchronous operation, even if it contains multiple document pages, is considered a single-page document. This means that for scanned images the value of Page is always 1. Synchronous operations will also return a Page value of 1 because every input document is considered to be a single-page document.

" }, "Query":{ "shape":"Query", "documentation":"

" } }, "documentation":"

A Block represents items that are recognized in a document within a group of pixels close to each other. The information returned in a Block object depends on the type of operation. In text detection for documents (for example DetectDocumentText), you get information about the detected words and lines of text. In text analysis (for example AnalyzeDocument), you can also get information about the fields, tables, and selection elements that are detected in the document.

An array of Block objects is returned by both synchronous and asynchronous operations. In synchronous operations, such as DetectDocumentText, the array of Block objects is the entire set of results. In asynchronous operations, such as GetDocumentAnalysis, the array is returned over one or more responses.

For more information, see How Amazon Textract Works.
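As an illustration of traversing Block relationships, this sketch pairs the KEY and VALUE blocks produced by form analysis (KEY_VALUE_SET blocks); the sample blocks are hypothetical response data:

```python
# Sketch: pairing KEY and VALUE blocks via their Relationships lists.
def text_of(block, by_id):
    words = []
    for rel in block.get('Relationships', []):
        if rel['Type'] == 'CHILD':
            words.extend(by_id[i]['Text'] for i in rel['Ids'])
    return ' '.join(words)

def key_value_pairs(blocks):
    by_id = {b['Id']: b for b in blocks}
    pairs = {}
    for b in blocks:
        if b['BlockType'] == 'KEY_VALUE_SET' and 'KEY' in b.get('EntityTypes', []):
            key = text_of(b, by_id)
            # A VALUE relationship on a KEY block points at its VALUE block.
            for rel in b.get('Relationships', []):
                if rel['Type'] == 'VALUE':
                    for vid in rel['Ids']:
                        pairs[key] = text_of(by_id[vid], by_id)
    return pairs

sample = [
    {'Id': 'k', 'BlockType': 'KEY_VALUE_SET', 'EntityTypes': ['KEY'],
     'Relationships': [{'Type': 'CHILD', 'Ids': ['kw']},
                       {'Type': 'VALUE', 'Ids': ['v']}]},
    {'Id': 'v', 'BlockType': 'KEY_VALUE_SET', 'EntityTypes': ['VALUE'],
     'Relationships': [{'Type': 'CHILD', 'Ids': ['vw']}]},
    {'Id': 'kw', 'BlockType': 'WORD', 'Text': 'Name:'},
    {'Id': 'vw', 'BlockType': 'WORD', 'Text': 'Jane'},
]
print(key_value_pairs(sample))  # -> {'Name:': 'Jane'}
```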

" }, "BlockList":{ "type":"list", "member":{"shape":"Block"} }, "BlockType":{ "type":"string", "enum":[ "KEY_VALUE_SET", "PAGE", "LINE", "WORD", "TABLE", "CELL", "SELECTION_ELEMENT", "MERGED_CELL", "TITLE", "QUERY", "QUERY_RESULT", "SIGNATURE" ] }, "BoundingBox":{ "type":"structure", "members":{ "Width":{ "shape":"Float", "documentation":"

The width of the bounding box as a ratio of the overall document page width.

" }, "Height":{ "shape":"Float", "documentation":"

The height of the bounding box as a ratio of the overall document page height.

" }, "Left":{ "shape":"Float", "documentation":"

The left coordinate of the bounding box as a ratio of overall document page width.

" }, "Top":{ "shape":"Float", "documentation":"

The top coordinate of the bounding box as a ratio of overall document page height.

" } }, "documentation":"

The bounding box around the detected page, text, key-value pair, table, table cell, or selection element on a document page. The left (x-coordinate) and top (y-coordinate) values represent the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

The top and left values returned are ratios of the overall document page size. For example, if the input image is 700 x 200 pixels, and the top-left coordinate of the bounding box is 350 x 50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

The width and height values represent the dimensions of the bounding box as a ratio of the overall document page dimension. For example, if the document page size is 700 x 200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
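The ratio arithmetic above can be sketched as a conversion back to pixel coordinates for a page of known size:

```python
# Sketch: converting ratio-based BoundingBox values to pixel coordinates.
def to_pixels(bbox, page_width, page_height):
    return {
        'Left': bbox['Left'] * page_width,
        'Top': bbox['Top'] * page_height,
        'Width': bbox['Width'] * page_width,
        'Height': bbox['Height'] * page_height,
    }

# Example from the documentation: a 700 x 200 pixel page.
box = to_pixels({'Left': 0.5, 'Top': 0.25, 'Width': 0.1, 'Height': 0.2}, 700, 200)
print(box)  # -> {'Left': 350.0, 'Top': 50.0, 'Width': 70.0, 'Height': 40.0}
```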

" }, "ClientRequestToken":{ "type":"string", "max":64, "min":1, "pattern":"^[a-zA-Z0-9-_]+$" }, "ContentClassifier":{ "type":"string", "enum":[ "FreeOfPersonallyIdentifiableInformation", "FreeOfAdultContent" ] }, "ContentClassifiers":{ "type":"list", "member":{"shape":"ContentClassifier"}, "max":256 }, "DetectDocumentTextRequest":{ "type":"structure", "required":["Document"], "members":{ "Document":{ "shape":"Document", "documentation":"

The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can't pass image bytes. The document must be an image in JPEG or PNG format.

If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.

" } } }, "DetectDocumentTextResponse":{ "type":"structure", "members":{ "DocumentMetadata":{ "shape":"DocumentMetadata", "documentation":"

Metadata about the document. It contains the number of pages that are detected in the document.

" }, "Blocks":{ "shape":"BlockList", "documentation":"

An array of Block objects that contain the text that's detected in the document.

" }, "DetectDocumentTextModelVersion":{ "shape":"String", "documentation":"

" } } }, "DetectedSignature":{ "type":"structure", "members":{ "Page":{ "shape":"UInteger", "documentation":"

The page a detected signature was found on.

" } }, "documentation":"

A structure that holds information regarding a detected signature on a page.

" }, "DetectedSignatureList":{ "type":"list", "member":{"shape":"DetectedSignature"} }, "Document":{ "type":"structure", "members":{ "Bytes":{ "shape":"ImageBlob", "documentation":"

A blob of base64-encoded document bytes. The maximum size of a document that's provided in a blob of bytes is 5 MB. The document bytes must be in PNG or JPEG format.

If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the Bytes field.

" }, "S3Object":{ "shape":"S3Object", "documentation":"

Identifies an S3 object as the document source. The maximum size of a document that's stored in an S3 bucket is 5 MB.

" } }, "documentation":"

The input document, either as bytes or as an S3 object.

You pass image bytes to an Amazon Textract API operation by using the Bytes property. For example, you would use the Bytes property to pass a document loaded from a local file system. Image bytes passed by using the Bytes property must be base64 encoded. Your code might not need to encode document file bytes if you're using an AWS SDK to call Amazon Textract API operations.

You pass images stored in an S3 bucket to an Amazon Textract API operation by using the S3Object property. Documents stored in an S3 bucket don't need to be base64 encoded.

The AWS Region for the S3 bucket that contains the S3 object must match the AWS Region that you use for Amazon Textract operations.

If you use the AWS CLI to call Amazon Textract operations, passing image bytes using the Bytes property isn't supported. You must first upload the document to an Amazon S3 bucket, and then call the operation using the S3Object property.

For Amazon Textract to process an S3 object, the user must have permission to access the S3 object.
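The two input forms described above can be sketched as request builders. This is an illustration, not an SDK call; the function names, bucket, and file names are placeholders.

```python
# Sketch of assembling the Document parameter in its two forms.
def document_from_local_file(path):
    # When calling through an AWS SDK such as boto3, raw bytes can usually be
    # passed directly and the SDK handles encoding; explicit base64 encoding
    # is only needed when constructing the request yourself.
    with open(path, "rb") as f:
        return {"Bytes": f.read()}

def document_from_s3(bucket, name, version=None):
    # Documents referenced from S3 don't need to be base64 encoded.
    obj = {"Bucket": bucket, "Name": name}
    if version is not None:
        obj["Version"] = version
    return {"S3Object": obj}

doc = document_from_s3("my-input-bucket", "scans/receipt-001.png")
```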

" }, "DocumentGroup":{ "type":"structure", "members":{ "Type":{ "shape":"NonEmptyString", "documentation":"

The type of document that Amazon Textract has detected. See LINK for a list of all types returned by Textract.

" }, "SplitDocuments":{ "shape":"SplitDocumentList", "documentation":"

An array that contains information about the pages of a document, defined by logical boundary.

" }, "DetectedSignatures":{ "shape":"DetectedSignatureList", "documentation":"

A list of the detected signatures found in a document group.

" }, "UndetectedSignatures":{ "shape":"UndetectedSignatureList", "documentation":"

A list of any expected signatures not found in a document group.

" } }, "documentation":"

Summary information about documents grouped by the same document type.

" }, "DocumentGroupList":{ "type":"list", "member":{"shape":"DocumentGroup"} }, "DocumentLocation":{ "type":"structure", "members":{ "S3Object":{ "shape":"S3Object", "documentation":"

The Amazon S3 bucket that contains the input document.

" } }, "documentation":"

The Amazon S3 bucket that contains the document to be processed. It's used by asynchronous operations.

The input document can be an image file in JPEG or PNG format. It can also be a file in PDF format.

" }, "DocumentMetadata":{ "type":"structure", "members":{ "Pages":{ "shape":"UInteger", "documentation":"

The number of pages that are detected in the document.

" } }, "documentation":"

Information about the input document.

" }, "DocumentPages":{ "type":"list", "member":{"shape":"Document"}, "max":2, "min":1 }, "DocumentTooLargeException":{ "type":"structure", "members":{ }, "documentation":"

The document can't be processed because it's too large. The maximum document size for synchronous operations is 10 MB. The maximum document size for asynchronous operations is 500 MB for PDF files.

", "exception":true }, "EntityType":{ "type":"string", "enum":[ "KEY", "VALUE", "COLUMN_HEADER" ] }, "EntityTypes":{ "type":"list", "member":{"shape":"EntityType"} }, "ErrorCode":{"type":"string"}, "ExpenseCurrency":{ "type":"structure", "members":{ "Code":{ "shape":"String", "documentation":"

The currency code for the detected currency. The currently supported codes are:

" }, "Confidence":{ "shape":"Percent", "documentation":"

Percentage confidence in the detected currency.

" } }, "documentation":"

Returns the kind of currency detected.

" }, "ExpenseDetection":{ "type":"structure", "members":{ "Text":{ "shape":"String", "documentation":"

The word or line of text recognized by Amazon Textract.

" }, "Geometry":{"shape":"Geometry"}, "Confidence":{ "shape":"Percent", "documentation":"

The confidence in detection, as a percentage.

" } }, "documentation":"

An object used to store information about the Value or Label detected by Amazon Textract.

" }, "ExpenseDocument":{ "type":"structure", "members":{ "ExpenseIndex":{ "shape":"UInteger", "documentation":"

Denotes which invoice or receipt in the document the information comes from. The first document is 1, the second 2, and so on.

" }, "SummaryFields":{ "shape":"ExpenseFieldList", "documentation":"

Any information found outside of a table by Amazon Textract.

" }, "LineItemGroups":{ "shape":"LineItemGroupList", "documentation":"

Information detected on each table of a document, separated into LineItems.

" }, "Blocks":{ "shape":"BlockList", "documentation":"

This is a Block object, the same as reported when DetectDocumentText is run on a document. It provides word-level recognition of text.

" } }, "documentation":"

The structure holding all the information returned by AnalyzeExpense.

" }, "ExpenseDocumentList":{ "type":"list", "member":{"shape":"ExpenseDocument"} }, "ExpenseField":{ "type":"structure", "members":{ "Type":{ "shape":"ExpenseType", "documentation":"

The implied label of a detected element. Present alongside LabelDetection for explicit elements.

" }, "LabelDetection":{ "shape":"ExpenseDetection", "documentation":"

The explicitly stated label of a detected element.

" }, "ValueDetection":{ "shape":"ExpenseDetection", "documentation":"

The value of a detected element. Present in explicit and implicit elements.

" }, "PageNumber":{ "shape":"UInteger", "documentation":"

The page number the value was detected on.

" }, "Currency":{ "shape":"ExpenseCurrency", "documentation":"

Shows the kind of currency, both the code and the confidence associated with any monetary value detected.

" }, "GroupProperties":{ "shape":"ExpenseGroupPropertyList", "documentation":"

Shows which group a response object belongs to, such as whether an address line belongs to the vendor's address or the recipient's address.

" } }, "documentation":"

Breakdown of detected information, separated into the categories Type, LabelDetection, and ValueDetection.

" }, "ExpenseFieldList":{ "type":"list", "member":{"shape":"ExpenseField"} }, "ExpenseGroupProperty":{ "type":"structure", "members":{ "Types":{ "shape":"StringList", "documentation":"

Informs you whether the expense group is a name or an address.

" }, "Id":{ "shape":"String", "documentation":"

Provides a group Id number, which is the same for each member of the group.

" } }, "documentation":"

Shows the group that a certain key belongs to. This helps differentiate between names and addresses for different organizations, which can be hard to determine from the JSON response.

" }, "ExpenseGroupPropertyList":{ "type":"list", "member":{"shape":"ExpenseGroupProperty"} }, "ExpenseType":{ "type":"structure", "members":{ "Text":{ "shape":"String", "documentation":"

The word or line of text detected by Amazon Textract.

" }, "Confidence":{ "shape":"Percent", "documentation":"

The confidence of accuracy, as a percentage.

" } }, "documentation":"

An object used to store information about the Type detected by Amazon Textract.

" }, "Extraction":{ "type":"structure", "members":{ "LendingDocument":{ "shape":"LendingDocument", "documentation":"

Holds the structured data returned by AnalyzeDocument for lending documents.

" }, "ExpenseDocument":{"shape":"ExpenseDocument"}, "IdentityDocument":{"shape":"IdentityDocument"} }, "documentation":"

Contains information extracted by an analysis operation after using StartLendingAnalysis.

" }, "ExtractionList":{ "type":"list", "member":{"shape":"Extraction"} }, "FeatureType":{ "type":"string", "enum":[ "TABLES", "FORMS", "QUERIES", "SIGNATURES" ] }, "FeatureTypes":{ "type":"list", "member":{"shape":"FeatureType"} }, "Float":{"type":"float"}, "FlowDefinitionArn":{ "type":"string", "max":256 }, "Geometry":{ "type":"structure", "members":{ "BoundingBox":{ "shape":"BoundingBox", "documentation":"

An axis-aligned coarse representation of the location of the recognized item on the document page.

" }, "Polygon":{ "shape":"Polygon", "documentation":"

Within the bounding box, a fine-grained polygon around the recognized item.

" } }, "documentation":"

Information about where the following items are located on a document page: detected page, text, key-value pairs, tables, table cells, and selection elements.

" }, "GetDocumentAnalysisRequest":{ "type":"structure", "required":["JobId"], "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the text-detection job. The JobId is returned from StartDocumentAnalysis. A JobId value is only valid for 7 days.

" }, "MaxResults":{ "shape":"MaxResults", "documentation":"

The maximum number of results to return per paginated call. The largest value that you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
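The pagination pattern described above can be sketched as a loop. This is written against a stand-in for the real client call (with boto3 this would be client.get_document_analysis(JobId=..., MaxResults=..., NextToken=...)); the function names and fake responder are illustrations only.

```python
# Sketch of the NextToken pagination loop: keep requesting pages until no
# NextToken is returned, accumulating the Block objects from each page.
def collect_blocks(get_page, job_id, max_results=1000):
    blocks, token = [], None
    while True:
        kwargs = {"JobId": job_id, "MaxResults": max_results}
        if token:
            kwargs["NextToken"] = token
        resp = get_page(**kwargs)
        blocks.extend(resp.get("Blocks", []))
        token = resp.get("NextToken")
        if not token:  # no token means the last page has been returned
            return blocks

# A fake two-page responder, used only to exercise the loop.
def fake_get_page(JobId, MaxResults, NextToken=None):
    if NextToken is None:
        return {"Blocks": [{"Id": "a"}], "NextToken": "t1"}
    return {"Blocks": [{"Id": "b"}]}

all_blocks = collect_blocks(fake_get_page, "job-123")
```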

" } } }, "GetDocumentAnalysisResponse":{ "type":"structure", "members":{ "DocumentMetadata":{ "shape":"DocumentMetadata", "documentation":"

Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.

" }, "JobStatus":{ "shape":"JobStatus", "documentation":"

The current status of the text detection job.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text detection results.

" }, "Blocks":{ "shape":"BlockList", "documentation":"

The results of the text-analysis operation.

" }, "Warnings":{ "shape":"Warnings", "documentation":"

A list of warnings that occurred during the document-analysis operation.

" }, "StatusMessage":{ "shape":"StatusMessage", "documentation":"

Returned only if the detection job could not be completed. Contains an explanation of the error that occurred.

" }, "AnalyzeDocumentModelVersion":{ "shape":"String", "documentation":"

" } } }, "GetDocumentTextDetectionRequest":{ "type":"structure", "required":["JobId"], "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the text detection job. The JobId is returned from StartDocumentTextDetection. A JobId value is only valid for 7 days.

" }, "MaxResults":{ "shape":"MaxResults", "documentation":"

The maximum number of results to return per paginated call. The largest value you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.

" } } }, "GetDocumentTextDetectionResponse":{ "type":"structure", "members":{ "DocumentMetadata":{ "shape":"DocumentMetadata", "documentation":"

Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.

" }, "JobStatus":{ "shape":"JobStatus", "documentation":"

The current status of the text detection job.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.

" }, "Blocks":{ "shape":"BlockList", "documentation":"

The results of the text-detection operation.

" }, "Warnings":{ "shape":"Warnings", "documentation":"

A list of warnings that occurred during the text-detection operation for the document.

" }, "StatusMessage":{ "shape":"StatusMessage", "documentation":"

Returned only if the detection job could not be completed. Contains an explanation of the error that occurred.

" }, "DetectDocumentTextModelVersion":{ "shape":"String", "documentation":"

" } } }, "GetExpenseAnalysisRequest":{ "type":"structure", "required":["JobId"], "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the text detection job. The JobId is returned from StartExpenseAnalysis. A JobId value is only valid for 7 days.

" }, "MaxResults":{ "shape":"MaxResults", "documentation":"

The maximum number of results to return per paginated call. The largest value you can specify is 20. If you specify a value greater than 20, a maximum of 20 results is returned. The default value is 20.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.

" } } }, "GetExpenseAnalysisResponse":{ "type":"structure", "members":{ "DocumentMetadata":{ "shape":"DocumentMetadata", "documentation":"

Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.

" }, "JobStatus":{ "shape":"JobStatus", "documentation":"

The current status of the text detection job.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.

" }, "ExpenseDocuments":{ "shape":"ExpenseDocumentList", "documentation":"

The expenses detected by Amazon Textract.

" }, "Warnings":{ "shape":"Warnings", "documentation":"

A list of warnings that occurred during the text-detection operation for the document.

" }, "StatusMessage":{ "shape":"StatusMessage", "documentation":"

Returned only if the detection job could not be completed. Contains an explanation of the error that occurred.

" }, "AnalyzeExpenseModelVersion":{ "shape":"String", "documentation":"

The current model version of AnalyzeExpense.

" } } }, "GetLendingAnalysisRequest":{ "type":"structure", "required":["JobId"], "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the lending or text-detection job. The JobId is returned from StartLendingAnalysis. A JobId value is only valid for 7 days.

" }, "MaxResults":{ "shape":"MaxResults", "documentation":"

The maximum number of results to return per paginated call. The largest value that you can specify is 30. If you specify a value greater than 30, a maximum of 30 results is returned. The default value is 30.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the previous response was incomplete, Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of lending results.

" } } }, "GetLendingAnalysisResponse":{ "type":"structure", "members":{ "DocumentMetadata":{"shape":"DocumentMetadata"}, "JobStatus":{ "shape":"JobStatus", "documentation":"

The current status of the lending analysis job.

" }, "NextToken":{ "shape":"PaginationToken", "documentation":"

If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of lending results.

" }, "Results":{ "shape":"LendingResultList", "documentation":"

Holds the information returned by one of Amazon Textract's document analysis operations for the lending analysis.

" }, "Warnings":{ "shape":"Warnings", "documentation":"

A list of warnings that occurred during the lending analysis operation.

" }, "StatusMessage":{ "shape":"StatusMessage", "documentation":"

Returned only if the lending analysis job could not be completed. Contains an explanation of the error that occurred.

" }, "AnalyzeLendingModelVersion":{ "shape":"String", "documentation":"

The current model version of the Analyze Lending API.

" } } }, "GetLendingAnalysisSummaryRequest":{ "type":"structure", "required":["JobId"], "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the lending or text-detection job. The JobId is returned from StartLendingAnalysis. A JobId value is only valid for 7 days.

" } } }, "GetLendingAnalysisSummaryResponse":{ "type":"structure", "members":{ "DocumentMetadata":{"shape":"DocumentMetadata"}, "JobStatus":{ "shape":"JobStatus", "documentation":"

The current status of the lending analysis job.

" }, "Summary":{ "shape":"LendingSummary", "documentation":"

Contains summary information for documents grouped by type.

" }, "Warnings":{ "shape":"Warnings", "documentation":"

A list of warnings that occurred during the lending analysis operation.

" }, "StatusMessage":{ "shape":"StatusMessage", "documentation":"

Returned only if the lending analysis could not be completed. Contains an explanation of the error that occurred.

" }, "AnalyzeLendingModelVersion":{ "shape":"String", "documentation":"

The current model version of the Analyze Lending API.

" } } }, "HumanLoopActivationConditionsEvaluationResults":{ "type":"string", "max":10240 }, "HumanLoopActivationOutput":{ "type":"structure", "members":{ "HumanLoopArn":{ "shape":"HumanLoopArn", "documentation":"

The Amazon Resource Name (ARN) of the HumanLoop created.

" }, "HumanLoopActivationReasons":{ "shape":"HumanLoopActivationReasons", "documentation":"

Shows if and why human review was needed.

" }, "HumanLoopActivationConditionsEvaluationResults":{ "shape":"HumanLoopActivationConditionsEvaluationResults", "documentation":"

Shows the result of condition evaluations, including those conditions which activated a human review.

", "jsonvalue":true } }, "documentation":"

Shows the results of the human-in-the-loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

" }, "HumanLoopActivationReason":{"type":"string"}, "HumanLoopActivationReasons":{ "type":"list", "member":{"shape":"HumanLoopActivationReason"}, "min":1 }, "HumanLoopArn":{ "type":"string", "max":256 }, "HumanLoopConfig":{ "type":"structure", "required":[ "HumanLoopName", "FlowDefinitionArn" ], "members":{ "HumanLoopName":{ "shape":"HumanLoopName", "documentation":"

The name of the human workflow used for this image. This should be kept unique within a region.

" }, "FlowDefinitionArn":{ "shape":"FlowDefinitionArn", "documentation":"

The Amazon Resource Name (ARN) of the flow definition.

" }, "DataAttributes":{ "shape":"HumanLoopDataAttributes", "documentation":"

Sets attributes of the input data.

" } }, "documentation":"

Sets up the human review workflow the document will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

" }, "HumanLoopDataAttributes":{ "type":"structure", "members":{ "ContentClassifiers":{ "shape":"ContentClassifiers", "documentation":"

Sets whether the input image is free of personally identifiable information or adult content.

" } }, "documentation":"

Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information and adult content.

" }, "HumanLoopName":{ "type":"string", "max":63, "min":1, "pattern":"^[a-z0-9](-*[a-z0-9])*" }, "HumanLoopQuotaExceededException":{ "type":"structure", "members":{ "ResourceType":{ "shape":"String", "documentation":"

The resource type.

" }, "QuotaCode":{ "shape":"String", "documentation":"

The quota code.

" }, "ServiceCode":{ "shape":"String", "documentation":"

The service code.

" } }, "documentation":"

Indicates that you have exceeded the maximum number of active human-in-the-loop workflows available.

", "exception":true }, "IdList":{ "type":"list", "member":{"shape":"NonEmptyString"} }, "IdempotentParameterMismatchException":{ "type":"structure", "members":{ }, "documentation":"

A ClientRequestToken input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.

", "exception":true }, "IdentityDocument":{ "type":"structure", "members":{ "DocumentIndex":{ "shape":"UInteger", "documentation":"

Denotes the placement of a document in the IdentityDocument list. The first document is marked 1, the second 2, and so on.

" }, "IdentityDocumentFields":{ "shape":"IdentityDocumentFieldList", "documentation":"

The structure used to record information extracted from identity documents. Contains both normalized field and value of the extracted text.

" }, "Blocks":{ "shape":"BlockList", "documentation":"

Individual word recognition, as returned by document detection.

" } }, "documentation":"

The structure that lists each document processed in an AnalyzeID operation.

" }, "IdentityDocumentField":{ "type":"structure", "members":{ "Type":{"shape":"AnalyzeIDDetections"}, "ValueDetection":{"shape":"AnalyzeIDDetections"} }, "documentation":"

Structure containing both the normalized type of the extracted information and the text associated with it. These are extracted as Type and Value respectively.

" }, "IdentityDocumentFieldList":{ "type":"list", "member":{"shape":"IdentityDocumentField"} }, "IdentityDocumentList":{ "type":"list", "member":{"shape":"IdentityDocument"} }, "ImageBlob":{ "type":"blob", "max":10485760, "min":1 }, "InternalServerError":{ "type":"structure", "members":{ }, "documentation":"

Amazon Textract experienced a service issue. Try your call again.

", "exception":true, "fault":true }, "InvalidJobIdException":{ "type":"structure", "members":{ }, "documentation":"

An invalid job identifier was passed to an asynchronous analysis operation.

", "exception":true }, "InvalidKMSKeyException":{ "type":"structure", "members":{ }, "documentation":"

Indicates you do not have decrypt permissions with the KMS key entered, or the KMS key was entered incorrectly.

", "exception":true }, "InvalidParameterException":{ "type":"structure", "members":{ }, "documentation":"

An input parameter violated a constraint. For example, in synchronous operations, an InvalidParameterException exception occurs when neither of the S3Object or Bytes values are supplied in the Document request parameter. Validate your parameter before calling the API operation again.

", "exception":true }, "InvalidS3ObjectException":{ "type":"structure", "members":{ }, "documentation":"

Amazon Textract is unable to access the S3 object that's specified in the request. For more information, see Configure Access to Amazon S3. For troubleshooting information, see Troubleshooting Amazon S3.

", "exception":true }, "JobId":{ "type":"string", "max":64, "min":1, "pattern":"^[a-zA-Z0-9-_]+$" }, "JobStatus":{ "type":"string", "enum":[ "IN_PROGRESS", "SUCCEEDED", "FAILED", "PARTIAL_SUCCESS" ] }, "JobTag":{ "type":"string", "max":64, "min":1, "pattern":"[a-zA-Z0-9_.\\-:]+" }, "KMSKeyId":{ "type":"string", "max":2048, "min":1, "pattern":"^[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,2048}$" }, "LendingDetection":{ "type":"structure", "members":{ "Text":{ "shape":"String", "documentation":"

The text extracted for a detected value in a lending document.

" }, "SelectionStatus":{ "shape":"SelectionStatus", "documentation":"

The selection status of a selection element, such as an option button or check box.

" }, "Geometry":{"shape":"Geometry"}, "Confidence":{ "shape":"Percent", "documentation":"

The confidence level for the text of a detected value in a lending document.

" } }, "documentation":"

The results extracted for a lending document.

" }, "LendingDetectionList":{ "type":"list", "member":{"shape":"LendingDetection"} }, "LendingDocument":{ "type":"structure", "members":{ "LendingFields":{ "shape":"LendingFieldList", "documentation":"

An array of LendingField objects.

" }, "SignatureDetections":{ "shape":"SignatureDetectionList", "documentation":"

A list of signatures detected in a lending document.

" } }, "documentation":"

Holds the structured data returned by AnalyzeDocument for lending documents.

" }, "LendingField":{ "type":"structure", "members":{ "Type":{ "shape":"String", "documentation":"

The type of the lending document.

" }, "KeyDetection":{"shape":"LendingDetection"}, "ValueDetections":{ "shape":"LendingDetectionList", "documentation":"

An array of LendingDetection objects.

" } }, "documentation":"

Holds the normalized key-value pairs returned by AnalyzeDocument, including the document type, detected text, and geometry.

" }, "LendingFieldList":{ "type":"list", "member":{"shape":"LendingField"} }, "LendingResult":{ "type":"structure", "members":{ "Page":{ "shape":"UInteger", "documentation":"

The page number for a page, with regard to the whole submission.

" }, "PageClassification":{ "shape":"PageClassification", "documentation":"

The classifier result for a given page.

" }, "Extractions":{ "shape":"ExtractionList", "documentation":"

An array of Extraction objects that hold structured data, such as normalized key-value pairs instead of raw OCR detections.

" } }, "documentation":"

Contains the detections for each page analyzed through the Analyze Lending API.

" }, "LendingResultList":{ "type":"list", "member":{"shape":"LendingResult"} }, "LendingSummary":{ "type":"structure", "members":{ "DocumentGroups":{ "shape":"DocumentGroupList", "documentation":"

Contains an array of all DocumentGroup objects.

" }, "UndetectedDocumentTypes":{ "shape":"UndetectedDocumentTypeList", "documentation":"

A list of any document types not detected in the submission.

" } }, "documentation":"

Contains information regarding DocumentGroups and UndetectedDocumentTypes.

" }, "LimitExceededException":{ "type":"structure", "members":{ }, "documentation":"

An Amazon Textract service limit was exceeded. For example, if you start too many asynchronous jobs concurrently, calls to start operations (StartDocumentTextDetection, for example) raise a LimitExceededException exception (HTTP status code: 400) until the number of concurrently running jobs is below the Amazon Textract service limit.

", "exception":true }, "LineItemFields":{ "type":"structure", "members":{ "LineItemExpenseFields":{ "shape":"ExpenseFieldList", "documentation":"

ExpenseFields used to show information from detected lines on a table.

" } }, "documentation":"

A structure that holds information about the different lines found in a document's tables.

" }, "LineItemGroup":{ "type":"structure", "members":{ "LineItemGroupIndex":{ "shape":"UInteger", "documentation":"

The number used to identify a specific table in a document. The first table encountered will have a LineItemGroupIndex of 1, the second 2, etc.

" }, "LineItems":{ "shape":"LineItemList", "documentation":"

The breakdown of information on a particular line of a table.

" } }, "documentation":"

A grouping of tables which contain LineItems, with each table identified by the table's LineItemGroupIndex.

" }, "LineItemGroupList":{ "type":"list", "member":{"shape":"LineItemGroup"} }, "LineItemList":{ "type":"list", "member":{"shape":"LineItemFields"} }, "MaxResults":{ "type":"integer", "min":1 }, "NonEmptyString":{ "type":"string", "pattern":".*\\S.*" }, "NormalizedValue":{ "type":"structure", "members":{ "Value":{ "shape":"String", "documentation":"

The value of the date, written as Year-Month-DayTHour:Minute:Second.

" }, "ValueType":{ "shape":"ValueType", "documentation":"

The normalized type of the value detected. In this case, DATE.

" } }, "documentation":"

Contains information relating to dates in a document, including the type of value, and the value.

" }, "NotificationChannel":{ "type":"structure", "required":[ "SNSTopicArn", "RoleArn" ], "members":{ "SNSTopicArn":{ "shape":"SNSTopicArn", "documentation":"

The Amazon SNS topic that Amazon Textract posts the completion status to.

" }, "RoleArn":{ "shape":"RoleArn", "documentation":"

The Amazon Resource Name (ARN) of an IAM role that gives Amazon Textract publishing permissions to the Amazon SNS topic.

" } }, "documentation":"

The Amazon Simple Notification Service (Amazon SNS) topic to which Amazon Textract publishes the completion status of an asynchronous document operation.

" }, "OutputConfig":{ "type":"structure", "required":["S3Bucket"], "members":{ "S3Bucket":{ "shape":"S3Bucket", "documentation":"

The name of the bucket your output will go to.

" }, "S3Prefix":{ "shape":"S3ObjectName", "documentation":"

The prefix of the object key that the output will be saved to. If not specified, the prefix defaults to \"textract_output\".

" } }, "documentation":"

Sets whether or not your output will go to a user-created bucket. Used to set the name of the bucket and the prefix of the output file.

OutputConfig is an optional parameter that lets you adjust where your output will be placed. By default, Amazon Textract stores the results internally, and they can only be accessed by the Get API operations. With OutputConfig enabled, you can set the name of the bucket the output will be sent to and the file prefix of the results, where you can download your results. Additionally, you can set the KMSKeyID parameter to a customer master key (CMK) to encrypt your output. Without this parameter set, Amazon Textract encrypts server-side using the AWS managed CMK for Amazon S3.

Decryption of Customer Content is necessary for processing of the documents by Amazon Textract. If your account is opted out under an AI services opt-out policy, then all unencrypted Customer Content is immediately and permanently deleted after the Customer Content has been processed by the service. No copy of the output is retained by Amazon Textract. For information about how to opt out, see Managing AI services opt-out policy.

For more information on data privacy, see the Data Privacy FAQ.
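The OutputConfig parameter described above can be sketched as a small builder. This is an illustration only; the function name and bucket/prefix values are placeholders.

```python
# Illustration: assembling the OutputConfig parameter. S3Bucket is required;
# S3Prefix is optional.
def build_output_config(bucket, prefix=None):
    config = {"S3Bucket": bucket}
    if prefix is not None:
        config["S3Prefix"] = prefix
    # When S3Prefix is omitted, the service itself applies the default
    # "textract_output" prefix; nothing needs to be set client-side.
    return config

cfg = build_output_config("my-results-bucket", "analysis/run-42")
```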

" }, "PageClassification":{ "type":"structure", "required":[ "PageType", "PageNumber" ], "members":{ "PageType":{ "shape":"PredictionList", "documentation":"

The class, or document type, assigned to a detected Page object.

" }, "PageNumber":{ "shape":"PredictionList", "documentation":"

The page number the value was detected on, relative to Amazon Textract's starting position.

" } }, "documentation":"

The class assigned to a Page object detected in an input document. Contains information regarding the predicted type/class of a document's page and the page number that the Page object was detected on.

" }, "PageList":{ "type":"list", "member":{"shape":"UInteger"} }, "Pages":{ "type":"list", "member":{"shape":"UInteger"} }, "PaginationToken":{ "type":"string", "max":255, "min":1, "pattern":".*\\S.*" }, "Percent":{ "type":"float", "max":100, "min":0 }, "Point":{ "type":"structure", "members":{ "X":{ "shape":"Float", "documentation":"

The value of the X coordinate for a point on a Polygon.

" }, "Y":{ "shape":"Float", "documentation":"

The value of the Y coordinate for a point on a Polygon.

" } }, "documentation":"

The X and Y coordinates of a point on a document page. The X and Y values that are returned are ratios of the overall document page size. For example, if the input document is 700 x 200 pixels and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.

An array of Point objects, Polygon, is returned by DetectDocumentText. Polygon represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.

" }, "Polygon":{ "type":"list", "member":{"shape":"Point"} }, "Prediction":{ "type":"structure", "members":{ "Value":{ "shape":"NonEmptyString", "documentation":"

The predicted value of a detected object.

" }, "Confidence":{ "shape":"Percent", "documentation":"

Amazon Textract's confidence in its predicted value.

" } }, "documentation":"

Contains information regarding predicted values returned by Amazon Textract operations, including the predicted value and the confidence in the predicted value.

" }, "PredictionList":{ "type":"list", "member":{"shape":"Prediction"} }, "ProvisionedThroughputExceededException":{ "type":"structure", "members":{ }, "documentation":"

The number of requests exceeded your throughput limit. If you want to increase this limit, contact Amazon Textract.

", "exception":true }, "Queries":{ "type":"list", "member":{"shape":"Query"}, "min":1 }, "QueriesConfig":{ "type":"structure", "required":["Queries"], "members":{ "Queries":{ "shape":"Queries", "documentation":"

" } }, "documentation":"

" }, "Query":{ "type":"structure", "required":["Text"], "members":{ "Text":{ "shape":"QueryInput", "documentation":"

Question that Amazon Textract will apply to the document. An example would be \"What is the customer's SSN?\"

" }, "Alias":{ "shape":"QueryInput", "documentation":"

Alias attached to the query, for ease of location.

" }, "Pages":{ "shape":"QueryPages", "documentation":"

Pages is a parameter that you use to specify which pages to apply a query to.

" } }, "documentation":"

Each query contains the question you want to ask, in Text, and the alias you want to associate with it.

" }, "QueryInput":{ "type":"string", "max":200, "min":1, "pattern":"^[a-zA-Z0-9\\s!\"\\#\\$%'&\\(\\)\\*\\+\\,\\-\\./:;=\\?@\\[\\\\\\]\\^_`\\{\\|\\}~><]+$" }, "QueryPage":{ "type":"string", "max":9, "min":1, "pattern":"^[0-9\\*\\-]+$" }, "QueryPages":{ "type":"list", "member":{"shape":"QueryPage"}, "min":1 }, "Relationship":{ "type":"structure", "members":{ "Type":{ "shape":"RelationshipType", "documentation":"

The type of relationship that the blocks in the IDs array have with the current block. The relationship can be VALUE or CHILD. A relationship of type VALUE is a list that contains the ID of the VALUE block that's associated with the KEY of a key-value pair. A relationship of type CHILD is a list of IDs that identify WORD blocks in the case of lines, Cell blocks in the case of tables, and WORD blocks in the case of selection elements.

" }, "Ids":{ "shape":"IdList", "documentation":"

An array of IDs for related blocks. You can get the type of the relationship from the Type element.

" } }, "documentation":"

Information about how blocks are related to each other. A Block object contains 0 or more Relationship objects in a list, Relationships. For more information, see Block.

The Type element provides the type of the relationship for all blocks in the IDs array.

" }, "RelationshipList":{ "type":"list", "member":{"shape":"Relationship"} }, "RelationshipType":{ "type":"string", "enum":[ "VALUE", "CHILD", "COMPLEX_FEATURES", "MERGED_CELL", "TITLE", "ANSWER" ] }, "RoleArn":{ "type":"string", "max":2048, "min":20, "pattern":"arn:([a-z\\d-]+):iam::\\d{12}:role/?[a-zA-Z_0-9+=,.@\\-_/]+" }, "S3Bucket":{ "type":"string", "max":255, "min":3, "pattern":"[0-9A-Za-z\\.\\-_]*" }, "S3Object":{ "type":"structure", "members":{ "Bucket":{ "shape":"S3Bucket", "documentation":"

The name of the S3 bucket. Note that the # character is not valid in the file name.

" }, "Name":{ "shape":"S3ObjectName", "documentation":"

The file name of the input document. Synchronous operations can use image files that are in JPEG or PNG format. Asynchronous operations also support PDF and TIFF format files.

" }, "Version":{ "shape":"S3ObjectVersion", "documentation":"

If the bucket has versioning enabled, you can specify the object version.

" } }, "documentation":"

The S3 bucket name and file name that identifies the document.

The AWS Region for the S3 bucket that contains the document must match the Region that you use for Amazon Textract operations.

For Amazon Textract to process a file in an S3 bucket, the user must have permission to access the S3 bucket and file.

" }, "S3ObjectName":{ "type":"string", "max":1024, "min":1, "pattern":".*\\S.*" }, "S3ObjectVersion":{ "type":"string", "max":1024, "min":1, "pattern":".*\\S.*" }, "SNSTopicArn":{ "type":"string", "max":1024, "min":20, "pattern":"(^arn:([a-z\\d-]+):sns:[a-zA-Z\\d-]{1,20}:\\w{12}:.+$)" }, "SelectionStatus":{ "type":"string", "enum":[ "SELECTED", "NOT_SELECTED" ] }, "SignatureDetection":{ "type":"structure", "members":{ "Confidence":{ "shape":"Percent", "documentation":"

The confidence, from 0 to 100, in the predicted values for a detected signature.

" }, "Geometry":{"shape":"Geometry"} }, "documentation":"

Information regarding a detected signature on a page.

" }, "SignatureDetectionList":{ "type":"list", "member":{"shape":"SignatureDetection"} }, "SplitDocument":{ "type":"structure", "members":{ "Index":{ "shape":"UInteger", "documentation":"

The index for a given document in a DocumentGroup of a specific Type.

" }, "Pages":{ "shape":"PageList", "documentation":"

An array of page numbers for a given document, ordered by logical boundary.

" } }, "documentation":"

Contains information about the pages of a document, defined by logical boundary.

" }, "SplitDocumentList":{ "type":"list", "member":{"shape":"SplitDocument"} }, "StartDocumentAnalysisRequest":{ "type":"structure", "required":[ "DocumentLocation", "FeatureTypes" ], "members":{ "DocumentLocation":{ "shape":"DocumentLocation", "documentation":"

The location of the document to be processed.

" }, "FeatureTypes":{ "shape":"FeatureTypes", "documentation":"

A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. To perform both types of analysis, add TABLES and FORMS to FeatureTypes. All lines and words detected in the document are included in the response (including text that isn't related to the value of FeatureTypes).

" }, "ClientRequestToken":{ "shape":"ClientRequestToken", "documentation":"

The idempotent token that you use to identify the start request. If you use the same token with multiple StartDocumentAnalysis requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.

" }, "JobTag":{ "shape":"JobTag", "documentation":"

An identifier that you specify that's included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).

" }, "NotificationChannel":{ "shape":"NotificationChannel", "documentation":"

The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.

" }, "OutputConfig":{ "shape":"OutputConfig", "documentation":"

Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally, to be accessed by the GetDocumentAnalysis operation.

" }, "KMSKeyId":{ "shape":"KMSKeyId", "documentation":"

The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, it is used for server-side encryption of the objects in the customer bucket. When this parameter is not provided, the result is encrypted server side, using SSE-S3.

" }, "QueriesConfig":{"shape":"QueriesConfig"} } }, "StartDocumentAnalysisResponse":{ "type":"structure", "members":{ "JobId":{ "shape":"JobId", "documentation":"

The identifier for the document text detection job. Use JobId to identify the job in a subsequent call to GetDocumentAnalysis. A JobId value is only valid for 7 days.

" } } }, "StartDocumentTextDetectionRequest":{ "type":"structure", "required":["DocumentLocation"], "members":{ "DocumentLocation":{ "shape":"DocumentLocation", "documentation":"

The location of the document to be processed.

" }, "ClientRequestToken":{ "shape":"ClientRequestToken", "documentation":"

The idempotent token that's used to identify the start request. If you use the same token with multiple StartDocumentTextDetection requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.

" }, "JobTag":{ "shape":"JobTag", "documentation":"

An identifier that you specify that's included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).

" }, "NotificationChannel":{ "shape":"NotificationChannel", "documentation":"

The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.

" }, "OutputConfig":{ "shape":"OutputConfig", "documentation":"

Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally, to be accessed with the GetDocumentTextDetection operation.

" }, "KMSKeyId":{ "shape":"KMSKeyId", "documentation":"

The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, it is used for server-side encryption of the objects in the customer bucket. When this parameter is not provided, the result is encrypted server side, using SSE-S3.

" } } }, "StartDocumentTextDetectionResponse":{ "type":"structure", "members":{ "JobId":{ "shape":"JobId", "documentation":"

The identifier of the text detection job for the document. Use JobId to identify the job in a subsequent call to GetDocumentTextDetection. A JobId value is only valid for 7 days.

" } } }, "StartExpenseAnalysisRequest":{ "type":"structure", "required":["DocumentLocation"], "members":{ "DocumentLocation":{ "shape":"DocumentLocation", "documentation":"

The location of the document to be processed.

" }, "ClientRequestToken":{ "shape":"ClientRequestToken", "documentation":"

The idempotent token that's used to identify the start request. If you use the same token with multiple StartExpenseAnalysis requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.

" }, "JobTag":{ "shape":"JobTag", "documentation":"

An identifier you specify that's included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).

" }, "NotificationChannel":{ "shape":"NotificationChannel", "documentation":"

The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.

" }, "OutputConfig":{ "shape":"OutputConfig", "documentation":"

Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally, to be accessed by the GetExpenseAnalysis operation.

" }, "KMSKeyId":{ "shape":"KMSKeyId", "documentation":"

The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, it is used for server-side encryption of the objects in the customer bucket. When this parameter is not provided, the result is encrypted server side, using SSE-S3.

" } } }, "StartExpenseAnalysisResponse":{ "type":"structure", "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the text detection job. The JobId is returned from StartExpenseAnalysis. A JobId value is only valid for 7 days.

" } } }, "StartLendingAnalysisRequest":{ "type":"structure", "required":["DocumentLocation"], "members":{ "DocumentLocation":{"shape":"DocumentLocation"}, "ClientRequestToken":{ "shape":"ClientRequestToken", "documentation":"

The idempotent token that you use to identify the start request. If you use the same token with multiple StartLendingAnalysis requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.

" }, "JobTag":{ "shape":"JobTag", "documentation":"

An identifier that you specify to be included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).

" }, "NotificationChannel":{"shape":"NotificationChannel"}, "OutputConfig":{"shape":"OutputConfig"}, "KMSKeyId":{ "shape":"KMSKeyId", "documentation":"

The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, it is used for server-side encryption of the objects in the customer bucket. When this parameter is not provided, the result is encrypted server side, using SSE-S3.

" } } }, "StartLendingAnalysisResponse":{ "type":"structure", "members":{ "JobId":{ "shape":"JobId", "documentation":"

A unique identifier for the lending or text-detection job. The JobId is returned from StartLendingAnalysis. A JobId value is only valid for 7 days.

" } } }, "StatusMessage":{"type":"string"}, "String":{"type":"string"}, "StringList":{ "type":"list", "member":{"shape":"String"} }, "TextType":{ "type":"string", "enum":[ "HANDWRITING", "PRINTED" ] }, "ThrottlingException":{ "type":"structure", "members":{ }, "documentation":"

Amazon Textract is temporarily unable to process the request. Try your call again.

", "exception":true, "fault":true }, "UInteger":{ "type":"integer", "min":0 }, "UndetectedDocumentTypeList":{ "type":"list", "member":{"shape":"NonEmptyString"} }, "UndetectedSignature":{ "type":"structure", "members":{ "Page":{ "shape":"UInteger", "documentation":"

The page where a signature was expected but not found.

" } }, "documentation":"

A structure containing information about an undetected signature on a page where it was expected but not found.

" }, "UndetectedSignatureList":{ "type":"list", "member":{"shape":"UndetectedSignature"} }, "UnsupportedDocumentException":{ "type":"structure", "members":{ }, "documentation":"

The format of the input document isn't supported. Documents for operations can be in PNG, JPEG, PDF, or TIFF format.

", "exception":true }, "ValueType":{ "type":"string", "enum":["DATE"] }, "Warning":{ "type":"structure", "members":{ "ErrorCode":{ "shape":"ErrorCode", "documentation":"

The error code for the warning.

" }, "Pages":{ "shape":"Pages", "documentation":"

A list of the pages that the warning applies to.

" } }, "documentation":"

A warning about an issue that occurred during asynchronous text analysis (StartDocumentAnalysis) or asynchronous document text detection (StartDocumentTextDetection).

" }, "Warnings":{ "type":"list", "member":{"shape":"Warning"} } }, "documentation":"

Amazon Textract detects and analyzes text in documents and converts it into machine-readable text. This is the API reference documentation for Amazon Textract.

" }