python-botocore/botocore/data/transfer/2018-11-05/service-2.json
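This is the botocore service model for AWS Transfer Family (API version 2018-11-05). botocore reads this JSON at runtime to build the "transfer" client, so each entry under "operations" below maps to a snake_cased client method. As a minimal, hypothetical usage sketch (assuming boto3 is installed and credentials are configured; the parameter values are illustrative placeholders, not taken from this file):

import boto3

# Build a Transfer Family client; botocore loads this service-2.json to do so.
transfer = boto3.client("transfer")

# The CreateServer operation defined below becomes create_server().
response = transfer.create_server(IdentityProviderType="SERVICE_MANAGED")
print(response["ServerId"])  # service-generated identifier for the new server

The JSON model itself follows.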
{
"version":"2.0",
"metadata":{
"apiVersion":"2018-11-05",
"endpointPrefix":"transfer",
"jsonVersion":"1.1",
"protocol":"json",
"serviceAbbreviation":"AWS Transfer",
"serviceFullName":"AWS Transfer Family",
"serviceId":"Transfer",
"signatureVersion":"v4",
"signingName":"transfer",
"targetPrefix":"TransferService",
"uid":"transfer-2018-11-05"
},
"operations":{
"CreateAccess":{
"name":"CreateAccess",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateAccessRequest"},
"output":{"shape":"CreateAccessResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Used by administrators to choose which groups in the directory should have access to upload and download files over the enabled protocols using Transfer Family. For example, a Microsoft Active Directory might contain 50,000 users, but only a small fraction might need the ability to transfer files to the server. An administrator can use <code>CreateAccess</code> to limit the access to the correct set of users who need this ability.</p>"
},
"CreateAgreement":{
"name":"CreateAgreement",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateAgreementRequest"},
"output":{"shape":"CreateAgreementResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Creates an agreement. An agreement is a bilateral trading partner agreement, or partnership, between an Transfer Family server and an AS2 process. The agreement defines the file and message transfer relationship between the server and the AS2 process. To define an agreement, Transfer Family combines a server, local profile, partner profile, certificate, and other attributes.</p> <p>The partner is identified with the <code>PartnerProfileId</code>, and the AS2 process is identified with the <code>LocalProfileId</code>.</p>"
},
"CreateConnector":{
"name":"CreateConnector",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateConnectorRequest"},
"output":{"shape":"CreateConnectorResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Creates the connector, which captures the parameters for an outbound connection for the AS2 protocol. The connector is required for sending files to an externally hosted AS2 server. For more details about connectors, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/create-b2b-server.html#configure-as2-connector\">Create AS2 connectors</a>.</p>"
},
"CreateProfile":{
"name":"CreateProfile",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateProfileRequest"},
"output":{"shape":"CreateProfileResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Creates the local or partner profile to use for AS2 transfers.</p>"
},
"CreateServer":{
"name":"CreateServer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateServerRequest"},
"output":{"shape":"CreateServerResponse"},
"errors":[
{"shape":"AccessDeniedException"},
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Instantiates an auto-scaling virtual server based on the selected file transfer protocol in Amazon Web Services. When you make updates to your file transfer protocol-enabled server or when you work with users, use the service-generated <code>ServerId</code> property that is assigned to the newly created server.</p>"
},
"CreateUser":{
"name":"CreateUser",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateUserRequest"},
"output":{"shape":"CreateUserResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Creates a user and associates them with an existing file transfer protocol-enabled server. You can only create and associate users with servers that have the <code>IdentityProviderType</code> set to <code>SERVICE_MANAGED</code>. Using parameters for <code>CreateUser</code>, you can specify the user name, set the home directory, store the user's public key, and assign the user's Identity and Access Management (IAM) role. You can also optionally add a session policy, and assign metadata with tags that can be used to group and search for users.</p>"
},
"CreateWorkflow":{
"name":"CreateWorkflow",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateWorkflowRequest"},
"output":{"shape":"CreateWorkflowResponse"},
"errors":[
{"shape":"AccessDeniedException"},
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p> Allows you to create a workflow with specified steps and step details the workflow invokes after file transfer completes. After creating a workflow, you can associate the workflow created with any transfer servers by specifying the <code>workflow-details</code> field in <code>CreateServer</code> and <code>UpdateServer</code> operations. </p>"
},
"DeleteAccess":{
"name":"DeleteAccess",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteAccessRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Allows you to delete the access specified in the <code>ServerID</code> and <code>ExternalID</code> parameters.</p>"
},
"DeleteAgreement":{
"name":"DeleteAgreement",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteAgreementRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Delete the agreement that's specified in the provided <code>AgreementId</code>.</p>"
},
"DeleteCertificate":{
"name":"DeleteCertificate",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteCertificateRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Deletes the certificate that's specified in the <code>CertificateId</code> parameter.</p>"
},
"DeleteConnector":{
"name":"DeleteConnector",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteConnectorRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Deletes the agreement that's specified in the provided <code>ConnectorId</code>.</p>"
},
"DeleteHostKey":{
"name":"DeleteHostKey",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteHostKeyRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Deletes the host key that's specified in the <code>HoskKeyId</code> parameter.</p>"
},
"DeleteProfile":{
"name":"DeleteProfile",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteProfileRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Deletes the profile that's specified in the <code>ProfileId</code> parameter.</p>"
},
"DeleteServer":{
"name":"DeleteServer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteServerRequest"},
"errors":[
{"shape":"AccessDeniedException"},
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Deletes the file transfer protocol-enabled server that you specify.</p> <p>No response returns from this operation.</p>"
},
"DeleteSshPublicKey":{
"name":"DeleteSshPublicKey",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteSshPublicKeyRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Deletes a user's Secure Shell (SSH) public key.</p>"
},
"DeleteUser":{
"name":"DeleteUser",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteUserRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Deletes the user belonging to a file transfer protocol-enabled server you specify.</p> <p>No response returns from this operation.</p> <note> <p>When you delete a user from a server, the user's information is lost.</p> </note>"
},
"DeleteWorkflow":{
"name":"DeleteWorkflow",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteWorkflowRequest"},
"errors":[
{"shape":"AccessDeniedException"},
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Deletes the specified workflow.</p>"
},
"DescribeAccess":{
"name":"DescribeAccess",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeAccessRequest"},
"output":{"shape":"DescribeAccessResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the access that is assigned to the specific file transfer protocol-enabled server, as identified by its <code>ServerId</code> property and its <code>ExternalId</code>.</p> <p>The response from this call returns the properties of the access that is associated with the <code>ServerId</code> value that was specified.</p>"
},
"DescribeAgreement":{
"name":"DescribeAgreement",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeAgreementRequest"},
"output":{"shape":"DescribeAgreementResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the agreement that's identified by the <code>AgreementId</code>.</p>"
},
"DescribeCertificate":{
"name":"DescribeCertificate",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeCertificateRequest"},
"output":{"shape":"DescribeCertificateResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the certificate that's identified by the <code>CertificateId</code>.</p>"
},
"DescribeConnector":{
"name":"DescribeConnector",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeConnectorRequest"},
"output":{"shape":"DescribeConnectorResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the connector that's identified by the <code>ConnectorId.</code> </p>"
},
"DescribeExecution":{
"name":"DescribeExecution",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeExecutionRequest"},
"output":{"shape":"DescribeExecutionResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>You can use <code>DescribeExecution</code> to check the details of the execution of the specified workflow.</p>"
},
"DescribeHostKey":{
"name":"DescribeHostKey",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeHostKeyRequest"},
"output":{"shape":"DescribeHostKeyResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Returns the details of the host key that's specified by the <code>HostKeyId</code> and <code>ServerId</code>.</p>"
},
"DescribeProfile":{
"name":"DescribeProfile",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeProfileRequest"},
"output":{"shape":"DescribeProfileResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Returns the details of the profile that's specified by the <code>ProfileId</code>.</p>"
},
"DescribeSecurityPolicy":{
"name":"DescribeSecurityPolicy",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeSecurityPolicyRequest"},
"output":{"shape":"DescribeSecurityPolicyResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the security policy that is attached to your file transfer protocol-enabled server. The response contains a description of the security policy's properties. For more information about security policies, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies.html\">Working with security policies</a>.</p>"
},
"DescribeServer":{
"name":"DescribeServer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeServerRequest"},
"output":{"shape":"DescribeServerResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes a file transfer protocol-enabled server that you specify by passing the <code>ServerId</code> parameter.</p> <p>The response contains a description of a server's properties. When you set <code>EndpointType</code> to VPC, the response will contain the <code>EndpointDetails</code>.</p>"
},
"DescribeUser":{
"name":"DescribeUser",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeUserRequest"},
"output":{"shape":"DescribeUserResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the user assigned to the specific file transfer protocol-enabled server, as identified by its <code>ServerId</code> property.</p> <p>The response from this call returns the properties of the user associated with the <code>ServerId</code> value that was specified.</p>"
},
"DescribeWorkflow":{
"name":"DescribeWorkflow",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DescribeWorkflowRequest"},
"output":{"shape":"DescribeWorkflowResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Describes the specified workflow.</p>"
},
"ImportCertificate":{
"name":"ImportCertificate",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ImportCertificateRequest"},
"output":{"shape":"ImportCertificateResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Imports the signing and encryption certificates that you need to create local (AS2) profiles and partner profiles.</p>"
},
"ImportHostKey":{
"name":"ImportHostKey",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ImportHostKeyRequest"},
"output":{"shape":"ImportHostKeyResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Adds a host key to the server that's specified by the <code>ServerId</code> parameter.</p>"
},
"ImportSshPublicKey":{
"name":"ImportSshPublicKey",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ImportSshPublicKeyRequest"},
"output":{"shape":"ImportSshPublicKeyResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Adds a Secure Shell (SSH) public key to a user account identified by a <code>UserName</code> value assigned to the specific file transfer protocol-enabled server, identified by <code>ServerId</code>.</p> <p>The response returns the <code>UserName</code> value, the <code>ServerId</code> value, and the name of the <code>SshPublicKeyId</code>.</p>"
},
"ListAccesses":{
"name":"ListAccesses",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListAccessesRequest"},
"output":{"shape":"ListAccessesResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Lists the details for all the accesses you have on your server.</p>"
},
"ListAgreements":{
"name":"ListAgreements",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListAgreementsRequest"},
"output":{"shape":"ListAgreementsResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Returns a list of the agreements for the server that's identified by the <code>ServerId</code> that you supply. If you want to limit the results to a certain number, supply a value for the <code>MaxResults</code> parameter. If you ran the command previously and received a value for <code>NextToken</code>, you can supply that value to continue listing agreements from where you left off.</p>"
},
"ListCertificates":{
"name":"ListCertificates",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListCertificatesRequest"},
"output":{"shape":"ListCertificatesResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Returns a list of the current certificates that have been imported into Transfer Family. If you want to limit the results to a certain number, supply a value for the <code>MaxResults</code> parameter. If you ran the command previously and received a value for the <code>NextToken</code> parameter, you can supply that value to continue listing certificates from where you left off.</p>"
},
"ListConnectors":{
"name":"ListConnectors",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListConnectorsRequest"},
"output":{"shape":"ListConnectorsResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Lists the connectors for the specified Region.</p>"
},
"ListExecutions":{
"name":"ListExecutions",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListExecutionsRequest"},
"output":{"shape":"ListExecutionsResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Lists all executions for the specified workflow.</p>"
},
"ListHostKeys":{
"name":"ListHostKeys",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListHostKeysRequest"},
"output":{"shape":"ListHostKeysResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Returns a list of host keys for the server that's specified by the <code>ServerId</code> parameter.</p>"
},
"ListProfiles":{
"name":"ListProfiles",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListProfilesRequest"},
"output":{"shape":"ListProfilesResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Returns a list of the profiles for your system. If you want to limit the results to a certain number, supply a value for the <code>MaxResults</code> parameter. If you ran the command previously and received a value for <code>NextToken</code>, you can supply that value to continue listing profiles from where you left off.</p>"
},
"ListSecurityPolicies":{
"name":"ListSecurityPolicies",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListSecurityPoliciesRequest"},
"output":{"shape":"ListSecurityPoliciesResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"}
],
"documentation":"<p>Lists the security policies that are attached to your file transfer protocol-enabled servers.</p>"
},
"ListServers":{
"name":"ListServers",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListServersRequest"},
"output":{"shape":"ListServersResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"}
],
"documentation":"<p>Lists the file transfer protocol-enabled servers that are associated with your Amazon Web Services account.</p>"
},
"ListTagsForResource":{
"name":"ListTagsForResource",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListTagsForResourceRequest"},
"output":{"shape":"ListTagsForResourceResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"}
],
"documentation":"<p>Lists all of the tags associated with the Amazon Resource Name (ARN) that you specify. The resource can be a user, server, or role.</p>"
},
"ListUsers":{
"name":"ListUsers",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListUsersRequest"},
"output":{"shape":"ListUsersResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Lists the users for a file transfer protocol-enabled server that you specify by passing the <code>ServerId</code> parameter.</p>"
},
"ListWorkflows":{
"name":"ListWorkflows",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"ListWorkflowsRequest"},
"output":{"shape":"ListWorkflowsResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidNextTokenException"},
{"shape":"InvalidRequestException"}
],
"documentation":"<p>Lists all of your workflows.</p>"
},
"SendWorkflowStepState":{
"name":"SendWorkflowStepState",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"SendWorkflowStepStateRequest"},
"output":{"shape":"SendWorkflowStepStateResponse"},
"errors":[
{"shape":"AccessDeniedException"},
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Sends a callback for asynchronous custom steps.</p> <p> The <code>ExecutionId</code>, <code>WorkflowId</code>, and <code>Token</code> are passed to the target resource during execution of a custom step of a workflow. You must include those with their callback as well as providing a status. </p>"
},
"StartFileTransfer":{
"name":"StartFileTransfer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"StartFileTransferRequest"},
"output":{"shape":"StartFileTransferResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Begins an outbound file transfer to a remote AS2 server. You specify the <code>ConnectorId</code> and the file paths for where to send the files. </p>"
},
"StartServer":{
"name":"StartServer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"StartServerRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Changes the state of a file transfer protocol-enabled server from <code>OFFLINE</code> to <code>ONLINE</code>. It has no impact on a server that is already <code>ONLINE</code>. An <code>ONLINE</code> server can accept and process file transfer jobs.</p> <p>The state of <code>STARTING</code> indicates that the server is in an intermediate state, either not fully able to respond, or not fully online. The values of <code>START_FAILED</code> can indicate an error condition.</p> <p>No response is returned from this call.</p>"
},
"StopServer":{
"name":"StopServer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"StopServerRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Changes the state of a file transfer protocol-enabled server from <code>ONLINE</code> to <code>OFFLINE</code>. An <code>OFFLINE</code> server cannot accept and process file transfer jobs. Information tied to your server, such as server and user properties, are not affected by stopping your server.</p> <note> <p>Stopping the server does not reduce or impact your file transfer protocol endpoint billing; you must delete the server to stop being billed.</p> </note> <p>The state of <code>STOPPING</code> indicates that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of <code>STOP_FAILED</code> can indicate an error condition.</p> <p>No response is returned from this call.</p>"
},
"TagResource":{
"name":"TagResource",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"TagResourceRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Attaches a key-value pair to a resource, as identified by its Amazon Resource Name (ARN). Resources are users, servers, roles, and other entities.</p> <p>There is no response returned from this call.</p>"
},
"TestIdentityProvider":{
"name":"TestIdentityProvider",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"TestIdentityProviderRequest"},
"output":{"shape":"TestIdentityProviderResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>If the <code>IdentityProviderType</code> of a file transfer protocol-enabled server is <code>AWS_DIRECTORY_SERVICE</code> or <code>API_Gateway</code>, tests whether your identity provider is set up successfully. We highly recommend that you call this operation to test your authentication method as soon as you create your server. By doing so, you can troubleshoot issues with the identity provider integration to ensure that your users can successfully use the service.</p> <p> The <code>ServerId</code> and <code>UserName</code> parameters are required. The <code>ServerProtocol</code>, <code>SourceIp</code>, and <code>UserPassword</code> are all optional. </p> <note> <p> You cannot use <code>TestIdentityProvider</code> if the <code>IdentityProviderType</code> of your server is <code>SERVICE_MANAGED</code>. </p> </note> <ul> <li> <p> If you provide any incorrect values for any parameters, the <code>Response</code> field is empty. </p> </li> <li> <p> If you provide a server ID for a server that uses service-managed users, you get an error: </p> <p> <code> An error occurred (InvalidRequestException) when calling the TestIdentityProvider operation: s-<i>server-ID</i> not configured for external auth </code> </p> </li> <li> <p> If you enter a Server ID for the <code>--server-id</code> parameter that does not identify an actual Transfer server, you receive the following error: </p> <p> <code>An error occurred (ResourceNotFoundException) when calling the TestIdentityProvider operation: Unknown server</code> </p> </li> </ul>"
},
"UntagResource":{
"name":"UntagResource",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UntagResourceRequest"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"}
],
"documentation":"<p>Detaches a key-value pair from a resource, as identified by its Amazon Resource Name (ARN). Resources are users, servers, roles, and other entities.</p> <p>No response is returned from this call.</p>"
},
"UpdateAccess":{
"name":"UpdateAccess",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateAccessRequest"},
"output":{"shape":"UpdateAccessResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Allows you to update parameters for the access specified in the <code>ServerID</code> and <code>ExternalID</code> parameters.</p>"
},
"UpdateAgreement":{
"name":"UpdateAgreement",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateAgreementRequest"},
"output":{"shape":"UpdateAgreementResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Updates some of the parameters for an existing agreement. Provide the <code>AgreementId</code> and the <code>ServerId</code> for the agreement that you want to update, along with the new values for the parameters to update.</p>"
},
"UpdateCertificate":{
"name":"UpdateCertificate",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateCertificateRequest"},
"output":{"shape":"UpdateCertificateResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Updates the active and inactive dates for a certificate.</p>"
},
"UpdateConnector":{
"name":"UpdateConnector",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateConnectorRequest"},
"output":{"shape":"UpdateConnectorResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Updates some of the parameters for an existing connector. Provide the <code>ConnectorId</code> for the connector that you want to update, along with the new values for the parameters to update.</p>"
},
"UpdateHostKey":{
"name":"UpdateHostKey",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateHostKeyRequest"},
"output":{"shape":"UpdateHostKeyResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Updates the description for the host key that's specified by the <code>ServerId</code> and <code>HostKeyId</code> parameters.</p>"
},
"UpdateProfile":{
"name":"UpdateProfile",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateProfileRequest"},
"output":{"shape":"UpdateProfileResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Updates some of the parameters for an existing profile. Provide the <code>ProfileId</code> for the profile that you want to update, along with the new values for the parameters to update.</p>"
},
"UpdateServer":{
"name":"UpdateServer",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateServerRequest"},
"output":{"shape":"UpdateServerResponse"},
"errors":[
{"shape":"AccessDeniedException"},
{"shape":"ServiceUnavailableException"},
{"shape":"ConflictException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceExistsException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Updates the file transfer protocol-enabled server's properties after that server has been created.</p> <p>The <code>UpdateServer</code> call returns the <code>ServerId</code> of the server you updated.</p>"
},
"UpdateUser":{
"name":"UpdateUser",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateUserRequest"},
"output":{"shape":"UpdateUserResponse"},
"errors":[
{"shape":"ServiceUnavailableException"},
{"shape":"InternalServiceError"},
{"shape":"InvalidRequestException"},
{"shape":"ResourceNotFoundException"},
{"shape":"ThrottlingException"}
],
"documentation":"<p>Assigns new properties to a user. Parameters you pass modify any or all of the following: the home directory, role, and policy for the <code>UserName</code> and <code>ServerId</code> you specify.</p> <p>The response returns the <code>ServerId</code> and the <code>UserName</code> for the updated user.</p>"
}
},
"shapes":{
"AccessDeniedException":{
"type":"structure",
"members":{
"Message":{"shape":"ServiceErrorMessage"}
},
"documentation":"<p>You do not have sufficient access to perform this action.</p>",
"exception":true,
"synthetic":true
},
"AddressAllocationId":{"type":"string"},
"AddressAllocationIds":{
"type":"list",
"member":{"shape":"AddressAllocationId"}
},
"AgreementId":{
"type":"string",
"max":19,
"min":19,
"pattern":"^a-([0-9a-f]{17})$"
},
"AgreementStatusType":{
"type":"string",
"enum":[
"ACTIVE",
"INACTIVE"
]
},
"Arn":{
"type":"string",
"max":1600,
"min":20,
"pattern":"arn:.*"
},
"As2ConnectorConfig":{
"type":"structure",
"members":{
"LocalProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the AS2 local profile.</p>"
},
"PartnerProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the partner profile for the connector.</p>"
},
"MessageSubject":{
"shape":"MessageSubject",
"documentation":"<p>Used as the <code>Subject</code> HTTP header attribute in AS2 messages that are being sent with the connector.</p>"
},
"Compression":{
"shape":"CompressionEnum",
"documentation":"<p>Specifies whether the AS2 file is compressed.</p>"
},
"EncryptionAlgorithm":{
"shape":"EncryptionAlg",
"documentation":"<p>The algorithm that is used to encrypt the file.</p> <note> <p>You can only specify <code>NONE</code> if the URL for your connector uses HTTPS. This ensures that no traffic is sent in clear text.</p> </note>"
},
"SigningAlgorithm":{
"shape":"SigningAlg",
"documentation":"<p>The algorithm that is used to sign the AS2 messages sent with the connector.</p>"
},
"MdnSigningAlgorithm":{
"shape":"MdnSigningAlg",
"documentation":"<p>The signing algorithm for the MDN response.</p> <note> <p>If set to DEFAULT (or not set at all), the value for <code>SigningAlgorithm</code> is used.</p> </note>"
},
"MdnResponse":{
"shape":"MdnResponse",
"documentation":"<p>Used for outbound requests (from an Transfer Family server to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. Specify either of the following values:</p> <ul> <li> <p> <code>SYNC</code>: The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).</p> </li> <li> <p> <code>NONE</code>: Specifies that no MDN response is required.</p> </li> </ul>"
}
},
"documentation":"<p>Contains the details for a connector object. The connector object is used for AS2 outbound processes, to connect the Transfer Family customer with the trading partner.</p>"
},
"As2Id":{
"type":"string",
"max":128,
"min":1,
"pattern":"^[\\p{Print}\\s]*"
},
"As2Transport":{
"type":"string",
"enum":["HTTP"]
},
"As2Transports":{
"type":"list",
"member":{"shape":"As2Transport"},
"max":1,
"min":1
},
"CallbackToken":{
"type":"string",
"max":64,
"min":1,
"pattern":"\\w+"
},
"CertDate":{"type":"timestamp"},
"CertSerial":{
"type":"string",
"max":48,
"min":0,
"pattern":"^[\\p{XDigit}{2}:?]*"
},
"Certificate":{
"type":"string",
"max":1600
},
"CertificateBodyType":{
"type":"string",
"max":16384,
"min":1,
"pattern":"^[\\u0009\\u000A\\u000D\\u0020-\\u00FF]*",
"sensitive":true
},
"CertificateChainType":{
"type":"string",
"max":2097152,
"min":1,
"pattern":"^[\\u0009\\u000A\\u000D\\u0020-\\u00FF]*",
"sensitive":true
},
"CertificateId":{
"type":"string",
"max":22,
"min":22,
"pattern":"^cert-([0-9a-f]{17})$"
},
"CertificateIds":{
"type":"list",
"member":{"shape":"CertificateId"}
},
"CertificateStatusType":{
"type":"string",
"enum":[
"ACTIVE",
"PENDING_ROTATION",
"INACTIVE"
]
},
"CertificateType":{
"type":"string",
"enum":[
"CERTIFICATE",
"CERTIFICATE_WITH_PRIVATE_KEY"
]
},
"CertificateUsageType":{
"type":"string",
"enum":[
"SIGNING",
"ENCRYPTION"
]
},
"CompressionEnum":{
"type":"string",
"enum":[
"ZLIB",
"DISABLED"
]
},
"ConflictException":{
"type":"structure",
"required":["Message"],
"members":{
"Message":{"shape":"Message"}
},
"documentation":"<p>This exception is thrown when the <code>UpdateServer</code> is called for a file transfer protocol-enabled server that has VPC as the endpoint type and the server's <code>VpcEndpointID</code> is not in the available state.</p>",
"exception":true
},
"ConnectorId":{
"type":"string",
"max":19,
"min":19,
"pattern":"^c-([0-9a-f]{17})$"
},
"CopyStepDetails":{
"type":"structure",
"members":{
"Name":{
"shape":"WorkflowStepName",
"documentation":"<p>The name of the step, used as an identifier.</p>"
},
"DestinationFileLocation":{
"shape":"InputFileLocation",
"documentation":"<p>Specifies the location for the file being copied. Only applicable for Copy type workflow steps. Use <code>${Transfer:username}</code> in this field to parametrize the destination prefix by username.</p>"
},
"OverwriteExisting":{
"shape":"OverwriteExisting",
"documentation":"<p>A flag that indicates whether or not to overwrite an existing file of the same name. The default is <code>FALSE</code>.</p>"
},
"SourceFileLocation":{
"shape":"SourceFileLocation",
"documentation":"<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>Enter <code>${previous.file}</code> to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>Enter <code>${original.file}</code> to use the originally-uploaded file location as input for this step.</p> </li> </ul>"
}
},
"documentation":"<p>Each step type has its own <code>StepDetails</code> structure.</p>"
},
"CreateAccessRequest":{
"type":"structure",
"required":[
"Role",
"ServerId",
"ExternalId"
],
"members":{
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"HomeDirectoryMappings":{
"shape":"HomeDirectoryMappings",
"documentation":"<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to <code>/</code> and set <code>Target</code> to the <code>HomeDirectory</code> parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>"
},
"Policy":{
"shape":"Policy",
"documentation":"<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy.html\">Example session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Security Token Service API Reference</i>.</p> </note>"
},
"PosixProfile":{"shape":"PosixProfile"},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>"
}
}
},
"CreateAccessResponse":{
"type":"structure",
"required":[
"ServerId",
"ExternalId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that the user is attached to.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>The external identifier of the group whose users have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family.</p>"
}
}
},
"CreateAgreementRequest":{
"type":"structure",
"required":[
"ServerId",
"LocalProfileId",
"PartnerProfileId",
"BaseDirectory",
"AccessRole"
],
"members":{
"Description":{
"shape":"Description",
"documentation":"<p>A name or short description to identify the agreement. </p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance. This is the specific server that the agreement uses.</p>"
},
"LocalProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the AS2 local profile.</p>"
},
"PartnerProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the partner profile used in the agreement.</p>"
},
"BaseDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for files transferred by using the AS2 protocol.</p> <p>A <code>BaseDirectory</code> example is <i>DOC-EXAMPLE-BUCKET</i>/<i>home</i>/<i>mydirectory</i>.</p>"
},
"AccessRole":{
"shape":"Role",
"documentation":"<p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the files parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p>"
},
"Status":{
"shape":"AgreementStatusType",
"documentation":"<p>The status of the agreement. The agreement can be either <code>ACTIVE</code> or <code>INACTIVE</code>.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for agreements.</p>"
}
}
},
"CreateAgreementResponse":{
"type":"structure",
"required":["AgreementId"],
"members":{
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>The unique identifier for the agreement. Use this ID for deleting, or updating an agreement, as well as in any other API calls that require that you specify the agreement ID.</p>"
}
}
},
"CreateConnectorRequest":{
"type":"structure",
"required":[
"Url",
"As2Config",
"AccessRole"
],
"members":{
"Url":{
"shape":"Url",
"documentation":"<p>The URL of the partner's AS2 endpoint.</p>"
},
"As2Config":{
"shape":"As2ConnectorConfig",
"documentation":"<p>A structure that contains the parameters for a connector object.</p>"
},
"AccessRole":{
"shape":"Role",
"documentation":"<p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the files parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p>"
},
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. When set, you can view connector activity in your CloudWatch logs.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for connectors. Tags are metadata attached to connectors for any purpose.</p>"
}
}
},
"CreateConnectorResponse":{
"type":"structure",
"required":["ConnectorId"],
"members":{
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector, returned after the API call succeeds.</p>"
}
}
},
"CreateProfileRequest":{
"type":"structure",
"required":[
"As2Id",
"ProfileType"
],
"members":{
"As2Id":{
"shape":"As2Id",
"documentation":"<p>The <code>As2Id</code> is the <i>AS2-name</i>, as defined in the <a href=\"https://datatracker.ietf.org/doc/html/rfc4130\">RFC 4130</a>. For inbound transfers, this is the <code>AS2-From</code> header for the AS2 messages sent from the partner. For outbound connectors, this is the <code>AS2-To</code> header for the AS2 messages sent to the partner using the <code>StartFileTransfer</code> API operation. This ID cannot include spaces.</p>"
},
"ProfileType":{
"shape":"ProfileType",
"documentation":"<p>Determines the type of profile to create:</p> <ul> <li> <p>Specify <code>LOCAL</code> to create a local profile. A local profile represents the AS2-enabled Transfer Family server organization or party.</p> </li> <li> <p>Specify <code>PARTNER</code> to create a partner profile. A partner profile represents a remote organization, external to Transfer Family.</p> </li> </ul>"
},
"CertificateIds":{
"shape":"CertificateIds",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for AS2 profiles.</p>"
}
}
},
"CreateProfileResponse":{
"type":"structure",
"required":["ProfileId"],
"members":{
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>The unique identifier for the AS2 profile, returned after the API call succeeds.</p>"
}
}
},
"CreateServerRequest":{
"type":"structure",
"members":{
"Certificate":{
"shape":"Certificate",
"documentation":"<p>The Amazon Resource Name (ARN) of the Certificate Manager (ACM) certificate. Required when <code>Protocols</code> is set to <code>FTPS</code>.</p> <p>To request a new public certificate, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html\">Request a public certificate</a> in the <i>Certificate Manager User Guide</i>.</p> <p>To import an existing certificate into ACM, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html\">Importing certificates into ACM</a> in the <i>Certificate Manager User Guide</i>.</p> <p>To request a private certificate to use FTPS through private IP addresses, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-private.html\">Request a private certificate</a> in the <i>Certificate Manager User Guide</i>.</p> <p>Certificates with the following cryptographic algorithms and key sizes are supported:</p> <ul> <li> <p>2048-bit RSA (RSA_2048)</p> </li> <li> <p>4096-bit RSA (RSA_4096)</p> </li> <li> <p>Elliptic Prime Curve 256 bit (EC_prime256v1)</p> </li> <li> <p>Elliptic Prime Curve 384 bit (EC_secp384r1)</p> </li> <li> <p>Elliptic Prime Curve 521 bit (EC_secp521r1)</p> </li> </ul> <note> <p>The certificate must be a valid SSL/TLS X.509 version 3 certificate with FQDN or IP address specified and information about the issuer.</p> </note>"
},
"Domain":{
"shape":"Domain",
"documentation":"<p>The domain of the storage system that is used for file transfers. There are two domains available: Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS). The default value is S3.</p> <note> <p>After the server is created, the domain cannot be changed.</p> </note>"
},
"EndpointDetails":{
"shape":"EndpointDetails",
"documentation":"<p>The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make your endpoint accessible only to resources within your VPC, or you can attach Elastic IP addresses and make your endpoint accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.</p>"
},
"EndpointType":{
"shape":"EndpointType",
"documentation":"<p>The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.</p> <note> <p> After May 19, 2021, you won't be able to create a server using <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account if your account hasn't already done so before May 19, 2021. If you have already created servers with <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account on or before May 19, 2021, you will not be affected. After this date, use <code>EndpointType</code>=<code>VPC</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> <p>It is recommended that you use <code>VPC</code> as the <code>EndpointType</code>. With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with <code>EndpointType</code> set to <code>VPC_ENDPOINT</code>.</p> </note>"
},
"HostKey":{
"shape":"HostKey",
"documentation":"<p>The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.</p> <p>Use the following command to generate an RSA 2048 bit key with no passphrase:</p> <p> <code>ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Use a minimum value of 2048 for the <code>-b</code> option. You can create a stronger key by using 3072 or 4096.</p> <p>Use the following command to generate an ECDSA 256 bit key with no passphrase:</p> <p> <code>ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Valid values for the <code>-b</code> option for ECDSA are 256, 384, and 521.</p> <p>Use the following command to generate an ED25519 key with no passphrase:</p> <p> <code>ssh-keygen -t ed25519 -N \"\" -f my-new-server-key</code>.</p> <p>For all of these commands, you can replace <i>my-new-server-key</i> with a string of your choice.</p> <important> <p>If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.</p> </important> <p>For more information, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/edit-server-config.html#configuring-servers-change-host-key\">Update host keys for your SFTP-enabled server</a> in the <i>Transfer Family User Guide</i>.</p>"
},
"IdentityProviderDetails":{
"shape":"IdentityProviderDetails",
"documentation":"<p>Required when <code>IdentityProviderType</code> is set to <code>AWS_DIRECTORY_SERVICE</code> or <code>API_GATEWAY</code>. Accepts an array containing all of the information required to use a directory in <code>AWS_DIRECTORY_SERVICE</code> or invoke a customer-supplied authentication API, including the API Gateway URL. Not required when <code>IdentityProviderType</code> is set to <code>SERVICE_MANAGED</code>.</p>"
},
"IdentityProviderType":{
"shape":"IdentityProviderType",
"documentation":"<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use an Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter or the <code>IdentityProviderDetails</code> data type.</p>"
},
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>"
},
"PostAuthenticationLoginBanner":{
"shape":"PostAuthenticationLoginBanner",
"documentation":"<p>Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.</p> <note> <p>The SFTP protocol does not support post-authentication display banners.</p> </note>"
},
"PreAuthenticationLoginBanner":{
"shape":"PreAuthenticationLoginBanner",
"documentation":"<p>Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:</p> <p> <code>This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.</code> </p>"
},
"Protocols":{
"shape":"Protocols",
"documentation":"<p>Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:</p> <ul> <li> <p> <code>SFTP</code> (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH</p> </li> <li> <p> <code>FTPS</code> (File Transfer Protocol Secure): File transfer with TLS encryption</p> </li> <li> <p> <code>FTP</code> (File Transfer Protocol): Unencrypted file transfer</p> </li> <li> <p> <code>AS2</code> (Applicability Statement 2): used for transporting structured business-to-business data</p> </li> </ul> <note> <ul> <li> <p>If you select <code>FTPS</code>, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.</p> </li> <li> <p>If <code>Protocol</code> includes either <code>FTP</code> or <code>FTPS</code>, then the <code>EndpointType</code> must be <code>VPC</code> and the <code>IdentityProviderType</code> must be <code>AWS_DIRECTORY_SERVICE</code> or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>FTP</code>, then <code>AddressAllocationIds</code> cannot be associated.</p> </li> <li> <p>If <code>Protocol</code> is set only to <code>SFTP</code>, the <code>EndpointType</code> can be set to <code>PUBLIC</code> and the <code>IdentityProviderType</code> can be set to <code>SERVICE_MANAGED</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>AS2</code>, then the <code>EndpointType</code> must be <code>VPC</code>, and domain must be Amazon S3.</p> </li> </ul> </note>"
},
"ProtocolDetails":{
"shape":"ProtocolDetails",
"documentation":"<p>The protocol settings that are configured for your server.</p> <ul> <li> <p> To indicate passive mode (for FTP and FTPS protocols), use the <code>PassiveIp</code> parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer. </p> </li> <li> <p>To ignore the error that is generated when the client attempts to use the <code>SETSTAT</code> command on a file that you are uploading to an Amazon S3 bucket, use the <code>SetStatOption</code> parameter. To have the Transfer Family server ignore the <code>SETSTAT</code> command and upload files without needing to make any changes to your SFTP client, set the value to <code>ENABLE_NO_OP</code>. If you set the <code>SetStatOption</code> parameter to <code>ENABLE_NO_OP</code>, Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a <code>SETSTAT</code> call.</p> </li> <li> <p>To determine whether your Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the <code>TlsSessionResumptionMode</code> parameter.</p> </li> <li> <p> <code>As2Transports</code> indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p> </li> </ul>"
},
"SecurityPolicyName":{
"shape":"SecurityPolicyName",
"documentation":"<p>Specifies the name of the security policy that is attached to the server.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for servers.</p>"
},
"WorkflowDetails":{
"shape":"WorkflowDetails",
"documentation":"<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.</p>"
}
}
},
"CreateServerResponse":{
"type":"structure",
"required":["ServerId"],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The service-assigned identifier of the server that is created.</p>"
}
}
},
"CreateUserRequest":{
"type":"structure",
"required":[
"Role",
"ServerId",
"UserName"
],
"members":{
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"HomeDirectoryMappings":{
"shape":"HomeDirectoryMappings",
"documentation":"<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock your user down to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to <code>/</code> and set <code>Target</code> to the HomeDirectory parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>"
},
"Policy":{
"shape":"Policy",
"documentation":"<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy.html\">Example session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web Services Security Token Service API Reference</i>.</p> </note>"
},
"PosixProfile":{
"shape":"PosixProfile",
"documentation":"<p>Specifies the full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in Amazon EFS determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>"
},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.</p>"
},
"SshPublicKeyBody":{
"shape":"SshPublicKeyBody",
"documentation":"<p>The public portion of the Secure Shell (SSH) key used to authenticate the user to the server.</p> <p>The three standard SSH public key format elements are <code>&lt;key type&gt;</code>, <code>&lt;body base64&gt;</code>, and an optional <code>&lt;comment&gt;</code>, with spaces between each element.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p> <ul> <li> <p>For RSA keys, the key type is <code>ssh-rsa</code>.</p> </li> <li> <p>For ED25519 keys, the key type is <code>ssh-ed25519</code>.</p> </li> <li> <p>For ECDSA keys, the key type is either <code>ecdsa-sha2-nistp256</code>, <code>ecdsa-sha2-nistp384</code>, or <code>ecdsa-sha2-nistp521</code>, depending on the size of the key you generated.</p> </li> </ul>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for users. Tags are metadata attached to users for any purpose.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>A unique string that identifies a user and is associated with a <code>ServerId</code>. This user name must be a minimum of 3 and a maximum of 100 characters long. The following are valid characters: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The user name can't start with a hyphen, period, or at sign.</p>"
}
}
},
"CreateUserResponse":{
"type":"structure",
"required":[
"ServerId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that the user is attached to.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>A unique string that identifies a user account associated with a server.</p>"
}
}
},
"CreateWorkflowRequest":{
"type":"structure",
"required":["Steps"],
"members":{
"Description":{
"shape":"WorkflowDescription",
"documentation":"<p>A textual description for the workflow.</p>"
},
"Steps":{
"shape":"WorkflowSteps",
"documentation":"<p>Specifies the details for the steps that are in the specified workflow.</p> <p> The <code>TYPE</code> specifies which of the following actions is being taken for this step. </p> <ul> <li> <p> <i>COPY</i>: Copy the file to another location.</p> </li> <li> <p> <i>CUSTOM</i>: Perform a custom step with an Lambda function target.</p> </li> <li> <p> <i>DELETE</i>: Delete the file.</p> </li> <li> <p> <i>TAG</i>: Add a tag to the file.</p> </li> </ul> <note> <p> Currently, copying and tagging are supported only on S3. </p> </note> <p> For file location, you specify either the S3 bucket and key, or the EFS file system ID and path. </p>"
},
"OnExceptionSteps":{
"shape":"WorkflowSteps",
"documentation":"<p>Specifies the steps (actions) to take if errors are encountered during execution of the workflow.</p> <note> <p>For custom steps, the lambda function needs to send <code>FAILURE</code> to the call back API to kick off the exception steps. Additionally, if the lambda does not send <code>SUCCESS</code> before it times out, the exception steps are executed.</p> </note>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.</p>"
}
}
},
"CreateWorkflowResponse":{
"type":"structure",
"required":["WorkflowId"],
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
}
}
},
"CustomStepDetails":{
"type":"structure",
"members":{
"Name":{
"shape":"WorkflowStepName",
"documentation":"<p>The name of the step, used as an identifier.</p>"
},
"Target":{
"shape":"CustomStepTarget",
"documentation":"<p>The ARN for the lambda function that is being called.</p>"
},
"TimeoutSeconds":{
"shape":"CustomStepTimeoutSeconds",
"documentation":"<p>Timeout, in seconds, for the step.</p>"
},
"SourceFileLocation":{
"shape":"SourceFileLocation",
"documentation":"<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>Enter <code>${previous.file}</code> to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>Enter <code>${original.file}</code> to use the originally-uploaded file location as input for this step.</p> </li> </ul>"
}
},
"documentation":"<p>Each step type has its own <code>StepDetails</code> structure.</p>"
},
"CustomStepStatus":{
"type":"string",
"enum":[
"SUCCESS",
"FAILURE"
]
},
"CustomStepTarget":{
"type":"string",
"max":170,
"pattern":"arn:[a-z-]+:lambda:.*$"
},
"CustomStepTimeoutSeconds":{
"type":"integer",
"max":1800,
"min":1
},
"DateImported":{"type":"timestamp"},
"DeleteAccessRequest":{
"type":"structure",
"required":[
"ServerId",
"ExternalId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has this user assigned.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>"
}
}
},
"DeleteAgreementRequest":{
"type":"structure",
"required":[
"AgreementId",
"ServerId"
],
"members":{
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The server identifier associated with the agreement that you are deleting.</p>"
}
}
},
"DeleteCertificateRequest":{
"type":"structure",
"required":["CertificateId"],
"members":{
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>The identifier of the certificate object that you are deleting.</p>"
}
}
},
"DeleteConnectorRequest":{
"type":"structure",
"required":["ConnectorId"],
"members":{
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector.</p>"
}
}
},
"DeleteHostKeyRequest":{
"type":"structure",
"required":[
"ServerId",
"HostKeyId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that contains the host key that you are deleting.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>The identifier of the host key that you are deleting.</p>"
}
}
},
"DeleteProfileRequest":{
"type":"structure",
"required":["ProfileId"],
"members":{
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>The identifier of the profile that you are deleting.</p>"
}
}
},
"DeleteServerRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A unique system-assigned identifier for a server instance.</p>"
}
}
},
"DeleteSshPublicKeyRequest":{
"type":"structure",
"required":[
"ServerId",
"SshPublicKeyId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a file transfer protocol-enabled server instance that has the user assigned to it.</p>"
},
"SshPublicKeyId":{
"shape":"SshPublicKeyId",
"documentation":"<p>A unique identifier used to reference your user's specific SSH key.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>A unique string that identifies a user whose public key is being deleted.</p>"
}
}
},
"DeleteStepDetails":{
"type":"structure",
"members":{
"Name":{
"shape":"WorkflowStepName",
"documentation":"<p>The name of the step, used as an identifier.</p>"
},
"SourceFileLocation":{
"shape":"SourceFileLocation",
"documentation":"<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>Enter <code>${previous.file}</code> to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>Enter <code>${original.file}</code> to use the originally-uploaded file location as input for this step.</p> </li> </ul>"
}
},
"documentation":"<p>The name of the step, used to identify the delete step.</p>"
},
"DeleteUserRequest":{
"type":"structure",
"required":[
"ServerId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance that has the user assigned to it.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>A unique string that identifies a user that is being deleted from a server.</p>"
}
}
},
"DeleteWorkflowRequest":{
"type":"structure",
"required":["WorkflowId"],
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
}
}
},
"DescribeAccessRequest":{
"type":"structure",
"required":[
"ServerId",
"ExternalId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has this access assigned.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>"
}
}
},
"DescribeAccessResponse":{
"type":"structure",
"required":[
"ServerId",
"Access"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has this access assigned.</p>"
},
"Access":{
"shape":"DescribedAccess",
"documentation":"<p>The external identifier of the server that the access is attached to.</p>"
}
}
},
"DescribeAgreementRequest":{
"type":"structure",
"required":[
"AgreementId",
"ServerId"
],
"members":{
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The server identifier that's associated with the agreement.</p>"
}
}
},
"DescribeAgreementResponse":{
"type":"structure",
"required":["Agreement"],
"members":{
"Agreement":{
"shape":"DescribedAgreement",
"documentation":"<p>The details for the specified agreement, returned as a <code>DescribedAgreement</code> object.</p>"
}
}
},
"DescribeCertificateRequest":{
"type":"structure",
"required":["CertificateId"],
"members":{
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
}
}
},
"DescribeCertificateResponse":{
"type":"structure",
"required":["Certificate"],
"members":{
"Certificate":{
"shape":"DescribedCertificate",
"documentation":"<p>The details for the specified certificate, returned as an object.</p>"
}
}
},
"DescribeConnectorRequest":{
"type":"structure",
"required":["ConnectorId"],
"members":{
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector.</p>"
}
}
},
"DescribeConnectorResponse":{
"type":"structure",
"required":["Connector"],
"members":{
"Connector":{
"shape":"DescribedConnector",
"documentation":"<p>The structure that contains the details of the connector.</p>"
}
}
},
"DescribeExecutionRequest":{
"type":"structure",
"required":[
"ExecutionId",
"WorkflowId"
],
"members":{
"ExecutionId":{
"shape":"ExecutionId",
"documentation":"<p>A unique identifier for the execution of a workflow.</p>"
},
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
}
}
},
"DescribeExecutionResponse":{
"type":"structure",
"required":[
"WorkflowId",
"Execution"
],
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
},
"Execution":{
"shape":"DescribedExecution",
"documentation":"<p>The structure that contains the details of the workflow' execution.</p>"
}
}
},
"DescribeHostKeyRequest":{
"type":"structure",
"required":[
"ServerId",
"HostKeyId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that contains the host key that you want described.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>The identifier of the host key that you want described.</p>"
}
}
},
"DescribeHostKeyResponse":{
"type":"structure",
"required":["HostKey"],
"members":{
"HostKey":{
"shape":"DescribedHostKey",
"documentation":"<p>Returns the details for the specified host key.</p>"
}
}
},
"DescribeProfileRequest":{
"type":"structure",
"required":["ProfileId"],
"members":{
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>The identifier of the profile that you want described.</p>"
}
}
},
"DescribeProfileResponse":{
"type":"structure",
"required":["Profile"],
"members":{
"Profile":{
"shape":"DescribedProfile",
"documentation":"<p>The details of the specified profile, returned as an object.</p>"
}
}
},
"DescribeSecurityPolicyRequest":{
"type":"structure",
"required":["SecurityPolicyName"],
"members":{
"SecurityPolicyName":{
"shape":"SecurityPolicyName",
"documentation":"<p>Specifies the name of the security policy that is attached to the server.</p>"
}
}
},
"DescribeSecurityPolicyResponse":{
"type":"structure",
"required":["SecurityPolicy"],
"members":{
"SecurityPolicy":{
"shape":"DescribedSecurityPolicy",
"documentation":"<p>An array containing the properties of the security policy.</p>"
}
}
},
"DescribeServerRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server.</p>"
}
}
},
"DescribeServerResponse":{
"type":"structure",
"required":["Server"],
"members":{
"Server":{
"shape":"DescribedServer",
"documentation":"<p>An array containing the properties of a server with the <code>ServerID</code> you specified.</p>"
}
}
},
"DescribeUserRequest":{
"type":"structure",
"required":[
"ServerId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has this user assigned.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>The name of the user assigned to one or more servers. User names are part of the sign-in credentials to use the Transfer Family service and perform file transfer tasks.</p>"
}
}
},
"DescribeUserResponse":{
"type":"structure",
"required":[
"ServerId",
"User"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has this user assigned.</p>"
},
"User":{
"shape":"DescribedUser",
"documentation":"<p>An array containing the properties of the user account for the <code>ServerID</code> value that you specified.</p>"
}
}
},
"DescribeWorkflowRequest":{
"type":"structure",
"required":["WorkflowId"],
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
}
}
},
"DescribeWorkflowResponse":{
"type":"structure",
"required":["Workflow"],
"members":{
"Workflow":{
"shape":"DescribedWorkflow",
"documentation":"<p>The structure that contains the details of the workflow.</p>"
}
}
},
"DescribedAccess":{
"type":"structure",
"members":{
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryMappings":{
"shape":"HomeDirectoryMappings",
"documentation":"<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>In most cases, you can use this value instead of the session policy to lock down the associated access to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to '/' and set <code>Target</code> to the <code>HomeDirectory</code> parameter value.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"Policy":{
"shape":"Policy",
"documentation":"<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p>"
},
"PosixProfile":{"shape":"PosixProfile"},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>"
}
},
"documentation":"<p>Describes the properties of the access that was specified.</p>"
},
"DescribedAgreement":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The unique Amazon Resource Name (ARN) for the agreement.</p>"
},
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>The name or short description that's used to identify the agreement.</p>"
},
"Status":{
"shape":"AgreementStatusType",
"documentation":"<p>The current status of the agreement, either <code>ACTIVE</code> or <code>INACTIVE</code>.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance. This identifier indicates the specific server that the agreement uses.</p>"
},
"LocalProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the AS2 local profile.</p>"
},
"PartnerProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the partner profile used in the agreement.</p>"
},
"BaseDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for files that are transferred by using the AS2 protocol.</p>"
},
"AccessRole":{
"shape":"Role",
"documentation":"<p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the files parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for agreements.</p>"
}
},
"documentation":"<p>Describes the properties of an agreement.</p>"
},
"DescribedCertificate":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The unique Amazon Resource Name (ARN) for the certificate.</p>"
},
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
},
"Usage":{
"shape":"CertificateUsageType",
"documentation":"<p>Specifies whether this certificate is used for signing or encryption.</p>"
},
"Status":{
"shape":"CertificateStatusType",
"documentation":"<p>The certificate can be either <code>ACTIVE</code>, <code>PENDING_ROTATION</code>, or <code>INACTIVE</code>. <code>PENDING_ROTATION</code> means that this certificate will replace the current certificate when it expires.</p>"
},
"Certificate":{
"shape":"CertificateBodyType",
"documentation":"<p>The file name for the certificate.</p>"
},
"CertificateChain":{
"shape":"CertificateChainType",
"documentation":"<p>The list of certificates that make up the chain for the certificate.</p>"
},
"ActiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes active.</p>"
},
"InactiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes inactive.</p>"
},
"Serial":{
"shape":"CertSerial",
"documentation":"<p>The serial number for the certificate.</p>"
},
"NotBeforeDate":{
"shape":"CertDate",
"documentation":"<p>The earliest date that the certificate is valid.</p>"
},
"NotAfterDate":{
"shape":"CertDate",
"documentation":"<p>The final date that the certificate is valid.</p>"
},
"Type":{
"shape":"CertificateType",
"documentation":"<p>If a private key has been specified for the certificate, its type is <code>CERTIFICATE_WITH_PRIVATE_KEY</code>. If there is no private key, the type is <code>CERTIFICATE</code>.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>The name or description that's used to identity the certificate. </p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for certificates.</p>"
}
},
"documentation":"<p>Describes the properties of a certificate.</p>"
},
"DescribedConnector":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The unique Amazon Resource Name (ARN) for the connector.</p>"
},
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector.</p>"
},
"Url":{
"shape":"Url",
"documentation":"<p>The URL of the partner's AS2 endpoint.</p>"
},
"As2Config":{
"shape":"As2ConnectorConfig",
"documentation":"<p>A structure that contains the parameters for a connector object.</p>"
},
"AccessRole":{
"shape":"Role",
"documentation":"<p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the files parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p>"
},
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. When set, you can view connector activity in your CloudWatch logs.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for connectors.</p>"
}
},
"documentation":"<p>Describes the parameters for the connector, as identified by the <code>ConnectorId</code>.</p>"
},
"DescribedExecution":{
"type":"structure",
"members":{
"ExecutionId":{
"shape":"ExecutionId",
"documentation":"<p>A unique identifier for the execution of a workflow.</p>"
},
"InitialFileLocation":{
"shape":"FileLocation",
"documentation":"<p>A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.</p>"
},
"ServiceMetadata":{
"shape":"ServiceMetadata",
"documentation":"<p>A container object for the session details that are associated with a workflow.</p>"
},
"ExecutionRole":{
"shape":"Role",
"documentation":"<p>The IAM role associated with the execution.</p>"
},
"LoggingConfiguration":{
"shape":"LoggingConfiguration",
"documentation":"<p>The IAM logging role associated with the execution.</p>"
},
"PosixProfile":{"shape":"PosixProfile"},
"Status":{
"shape":"ExecutionStatus",
"documentation":"<p>The status is one of the execution. Can be in progress, completed, exception encountered, or handling the exception. </p>"
},
"Results":{
"shape":"ExecutionResults",
"documentation":"<p>A structure that describes the execution results. This includes a list of the steps along with the details of each step, error type and message (if any), and the <code>OnExceptionSteps</code> structure.</p>"
}
},
"documentation":"<p>The details for an execution object.</p>"
},
"DescribedHostKey":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The unique Amazon Resource Name (ARN) for the host key.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>A unique identifier for the host key.</p>"
},
"HostKeyFingerprint":{
"shape":"HostKeyFingerprint",
"documentation":"<p>The public key fingerprint, which is a short sequence of bytes used to identify the longer public key.</p>"
},
"Description":{
"shape":"HostKeyDescription",
"documentation":"<p>The text description for this host key.</p>"
},
"Type":{
"shape":"HostKeyType",
"documentation":"<p>The encryption algorithm that is used for the host key. The <code>Type</code> parameter is specified by using one of the following values:</p> <ul> <li> <p> <code>ssh-rsa</code> </p> </li> <li> <p> <code>ssh-ed25519</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp256</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp384</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp521</code> </p> </li> </ul>"
},
"DateImported":{
"shape":"DateImported",
"documentation":"<p>The date on which the host key was added to the server.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for host keys.</p>"
}
},
"documentation":"<p>The details for a server host key.</p>"
},
"DescribedProfile":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The unique Amazon Resource Name (ARN) for the profile.</p>"
},
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the local or partner AS2 profile.</p>"
},
"ProfileType":{
"shape":"ProfileType",
"documentation":"<p>Indicates whether to list only <code>LOCAL</code> type profiles or only <code>PARTNER</code> type profiles. If not supplied in the request, the command lists all types of profiles.</p>"
},
"As2Id":{
"shape":"As2Id",
"documentation":"<p>The <code>As2Id</code> is the <i>AS2-name</i>, as defined in the <a href=\"https://datatracker.ietf.org/doc/html/rfc4130\">RFC 4130</a>. For inbound transfers, this is the <code>AS2-From</code> header for the AS2 messages sent from the partner. For outbound connectors, this is the <code>AS2-To</code> header for the AS2 messages sent to the partner using the <code>StartFileTransfer</code> API operation. This ID cannot include spaces.</p>"
},
"CertificateIds":{
"shape":"CertificateIds",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for profiles.</p>"
}
},
"documentation":"<p>The details for a local or partner AS2 profile. </p>"
},
"DescribedSecurityPolicy":{
"type":"structure",
"required":["SecurityPolicyName"],
"members":{
"Fips":{
"shape":"Fips",
"documentation":"<p>Specifies whether this policy enables Federal Information Processing Standards (FIPS).</p>"
},
"SecurityPolicyName":{
"shape":"SecurityPolicyName",
"documentation":"<p>Specifies the name of the security policy that is attached to the server.</p>"
},
"SshCiphers":{
"shape":"SecurityPolicyOptions",
"documentation":"<p>Specifies the enabled Secure Shell (SSH) cipher encryption algorithms in the security policy that is attached to the server.</p>"
},
"SshKexs":{
"shape":"SecurityPolicyOptions",
"documentation":"<p>Specifies the enabled SSH key exchange (KEX) encryption algorithms in the security policy that is attached to the server.</p>"
},
"SshMacs":{
"shape":"SecurityPolicyOptions",
"documentation":"<p>Specifies the enabled SSH message authentication code (MAC) encryption algorithms in the security policy that is attached to the server.</p>"
},
"TlsCiphers":{
"shape":"SecurityPolicyOptions",
"documentation":"<p>Specifies the enabled Transport Layer Security (TLS) cipher encryption algorithms in the security policy that is attached to the server.</p>"
}
},
"documentation":"<p>Describes the properties of a security policy that was specified. For more information about security policies, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies.html\">Working with security policies</a>.</p>"
},
"DescribedServer":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>Specifies the unique Amazon Resource Name (ARN) of the server.</p>"
},
"Certificate":{
"shape":"Certificate",
"documentation":"<p>Specifies the ARN of the Amazon Web ServicesCertificate Manager (ACM) certificate. Required when <code>Protocols</code> is set to <code>FTPS</code>.</p>"
},
"ProtocolDetails":{
"shape":"ProtocolDetails",
"documentation":"<p>The protocol settings that are configured for your server.</p> <ul> <li> <p> To indicate passive mode (for FTP and FTPS protocols), use the <code>PassiveIp</code> parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer. </p> </li> <li> <p>To ignore the error that is generated when the client attempts to use the <code>SETSTAT</code> command on a file that you are uploading to an Amazon S3 bucket, use the <code>SetStatOption</code> parameter. To have the Transfer Family server ignore the <code>SETSTAT</code> command and upload files without needing to make any changes to your SFTP client, set the value to <code>ENABLE_NO_OP</code>. If you set the <code>SetStatOption</code> parameter to <code>ENABLE_NO_OP</code>, Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a <code>SETSTAT</code> call.</p> </li> <li> <p>To determine whether your Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the <code>TlsSessionResumptionMode</code> parameter.</p> </li> <li> <p> <code>As2Transports</code> indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p> </li> </ul>"
},
"Domain":{
"shape":"Domain",
"documentation":"<p>Specifies the domain of the storage system that is used for file transfers.</p>"
},
"EndpointDetails":{
"shape":"EndpointDetails",
"documentation":"<p>The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make your endpoint accessible only to resources within your VPC, or you can attach Elastic IP addresses and make your endpoint accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.</p>"
},
"EndpointType":{
"shape":"EndpointType",
"documentation":"<p>Defines the type of endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.</p>"
},
"HostKeyFingerprint":{
"shape":"HostKeyFingerprint",
"documentation":"<p>Specifies the Base64-encoded SHA256 fingerprint of the server's host key. This value is equivalent to the output of the <code>ssh-keygen -l -f my-new-server-key</code> command.</p>"
},
"IdentityProviderDetails":{
"shape":"IdentityProviderDetails",
"documentation":"<p>Specifies information to call a customer-supplied authentication API. This field is not populated when the <code>IdentityProviderType</code> of a server is <code>AWS_DIRECTORY_SERVICE</code> or <code>SERVICE_MANAGED</code>.</p>"
},
"IdentityProviderType":{
"shape":"IdentityProviderType",
"documentation":"<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use an Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter or the <code>IdentityProviderDetails</code> data type.</p>"
},
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>"
},
"PostAuthenticationLoginBanner":{
"shape":"PostAuthenticationLoginBanner",
"documentation":"<p>Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.</p> <note> <p>The SFTP protocol does not support post-authentication display banners.</p> </note>"
},
"PreAuthenticationLoginBanner":{
"shape":"PreAuthenticationLoginBanner",
"documentation":"<p>Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:</p> <p> <code>This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.</code> </p>"
},
"Protocols":{
"shape":"Protocols",
"documentation":"<p>Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:</p> <ul> <li> <p> <code>SFTP</code> (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH</p> </li> <li> <p> <code>FTPS</code> (File Transfer Protocol Secure): File transfer with TLS encryption</p> </li> <li> <p> <code>FTP</code> (File Transfer Protocol): Unencrypted file transfer</p> </li> <li> <p> <code>AS2</code> (Applicability Statement 2): used for transporting structured business-to-business data</p> </li> </ul> <note> <ul> <li> <p>If you select <code>FTPS</code>, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.</p> </li> <li> <p>If <code>Protocol</code> includes either <code>FTP</code> or <code>FTPS</code>, then the <code>EndpointType</code> must be <code>VPC</code> and the <code>IdentityProviderType</code> must be <code>AWS_DIRECTORY_SERVICE</code> or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>FTP</code>, then <code>AddressAllocationIds</code> cannot be associated.</p> </li> <li> <p>If <code>Protocol</code> is set only to <code>SFTP</code>, the <code>EndpointType</code> can be set to <code>PUBLIC</code> and the <code>IdentityProviderType</code> can be set to <code>SERVICE_MANAGED</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>AS2</code>, then the <code>EndpointType</code> must be <code>VPC</code>, and domain must be Amazon S3.</p> </li> </ul> </note>"
},
"SecurityPolicyName":{
"shape":"SecurityPolicyName",
"documentation":"<p>Specifies the name of the security policy that is attached to the server.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>Specifies the unique system-assigned identifier for a server that you instantiate.</p>"
},
"State":{
"shape":"State",
"documentation":"<p>The condition of the server that was described. A value of <code>ONLINE</code> indicates that the server can accept jobs and transfer files. A <code>State</code> value of <code>OFFLINE</code> means that the server cannot perform file transfer operations.</p> <p>The states of <code>STARTING</code> and <code>STOPPING</code> indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of <code>START_FAILED</code> or <code>STOP_FAILED</code> can indicate an error condition.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Specifies the key-value pairs that you can use to search for and group servers that were assigned to the server that was described.</p>"
},
"UserCount":{
"shape":"UserCount",
"documentation":"<p>Specifies the number of users that are assigned to a server you specified with the <code>ServerId</code>.</p>"
},
"WorkflowDetails":{
"shape":"WorkflowDetails",
"documentation":"<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.</p>"
}
},
"documentation":"<p>Describes the properties of a file transfer protocol-enabled server that was specified.</p>"
},
"DescribedUser":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>Specifies the unique Amazon Resource Name (ARN) for the user that was requested to be described.</p>"
},
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryMappings":{
"shape":"HomeDirectoryMappings",
"documentation":"<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>In most cases, you can use this value instead of the session policy to lock your user down to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to '/' and set <code>Target</code> to the HomeDirectory parameter value.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"Policy":{
"shape":"Policy",
"documentation":"<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p>"
},
"PosixProfile":{
"shape":"PosixProfile",
"documentation":"<p>Specifies the full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon Elastic File System (Amazon EFS) file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>"
},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"SshPublicKeys":{
"shape":"SshPublicKeys",
"documentation":"<p>Specifies the public key portion of the Secure Shell (SSH) keys stored for the described user.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Specifies the key-value pairs for the user requested. Tag can be used to search for and group users for a variety of purposes.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>Specifies the name of the user that was requested to be described. User names are used for authentication purposes. This is the string that will be used by your user when they log in to your server.</p>"
}
},
"documentation":"<p>Describes the properties of a user that was specified.</p>"
},
"DescribedWorkflow":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>Specifies the unique Amazon Resource Name (ARN) for the workflow.</p>"
},
"Description":{
"shape":"WorkflowDescription",
"documentation":"<p>Specifies the text description for the workflow.</p>"
},
"Steps":{
"shape":"WorkflowSteps",
"documentation":"<p>Specifies the details for the steps that are in the specified workflow.</p>"
},
"OnExceptionSteps":{
"shape":"WorkflowSteps",
"documentation":"<p>Specifies the steps (actions) to take if errors are encountered during execution of the workflow.</p>"
},
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.</p>"
}
},
"documentation":"<p>Describes the properties of the specified workflow</p>"
},
"Description":{
"type":"string",
"max":200,
"min":1,
"pattern":"^[\\p{Graph}]+"
},
"DirectoryId":{
"type":"string",
"max":12,
"min":12,
"pattern":"^d-[0-9a-f]{10}$"
},
"Domain":{
"type":"string",
"enum":[
"S3",
"EFS"
]
},
"EfsFileLocation":{
"type":"structure",
"members":{
"FileSystemId":{
"shape":"EfsFileSystemId",
"documentation":"<p>The identifier of the file system, assigned by Amazon EFS.</p>"
},
"Path":{
"shape":"EfsPath",
"documentation":"<p>The pathname for the folder being used by a workflow.</p>"
}
},
"documentation":"<p>Reserved for future use.</p> <p> </p>"
},
"EfsFileSystemId":{
"type":"string",
"max":128,
"pattern":"^(arn:aws[-a-z]*:elasticfilesystem:[0-9a-z-:]+:(access-point/fsap|file-system/fs)-[0-9a-f]{8,40}|fs(ap)?-[0-9a-f]{8,40})$"
},
"EfsPath":{
"type":"string",
"max":65536,
"min":1,
"pattern":"^[^\\x00]+$"
},
"EncryptionAlg":{
"type":"string",
"enum":[
"AES128_CBC",
"AES192_CBC",
"AES256_CBC",
"NONE"
]
},
"EndpointDetails":{
"type":"structure",
"members":{
"AddressAllocationIds":{
"shape":"AddressAllocationIds",
"documentation":"<p>A list of address allocation IDs that are required to attach an Elastic IP address to your server's endpoint.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code> and it is only valid in the <code>UpdateServer</code> API.</p> </note>"
},
"SubnetIds":{
"shape":"SubnetIds",
"documentation":"<p>A list of subnet IDs that are required to host your server endpoint in your VPC.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code>.</p> </note>"
},
"VpcEndpointId":{
"shape":"VpcEndpointId",
"documentation":"<p>The identifier of the VPC endpoint.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC_ENDPOINT</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> </note>"
},
"VpcId":{
"shape":"VpcId",
"documentation":"<p>The VPC identifier of the VPC in which a server's endpoint will be hosted.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code>.</p> </note>"
},
"SecurityGroupIds":{
"shape":"SecurityGroupIds",
"documentation":"<p>A list of security groups IDs that are available to attach to your server's endpoint.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code>.</p> <p>You can edit the <code>SecurityGroupIds</code> property in the <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/API_UpdateServer.html\">UpdateServer</a> API only if you are changing the <code>EndpointType</code> from <code>PUBLIC</code> or <code>VPC_ENDPOINT</code> to <code>VPC</code>. To change security groups associated with your server's VPC endpoint after creation, use the Amazon EC2 <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVpcEndpoint.html\">ModifyVpcEndpoint</a> API.</p> </note>"
}
},
"documentation":"<p>The virtual private cloud (VPC) endpoint settings that are configured for your file transfer protocol-enabled server. With a VPC endpoint, you can restrict access to your server and resources only within your VPC. To control incoming internet traffic, invoke the <code>UpdateServer</code> API and attach an Elastic IP address to your server's endpoint.</p> <note> <p> After May 19, 2021, you won't be able to create a server using <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Servicesaccount if your account hasn't already done so before May 19, 2021. If you have already created servers with <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Servicesaccount on or before May 19, 2021, you will not be affected. After this date, use <code>EndpointType</code>=<code>VPC</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> </note>"
},
"EndpointType":{
"type":"string",
"enum":[
"PUBLIC",
"VPC",
"VPC_ENDPOINT"
]
},
"ExecutionError":{
"type":"structure",
"required":[
"Type",
"Message"
],
"members":{
"Type":{
"shape":"ExecutionErrorType",
"documentation":"<p>Specifies the error type.</p> <ul> <li> <p> <code>ALREADY_EXISTS</code>: occurs for a copy step, if the overwrite option is not selected and a file with the same name already exists in the target location.</p> </li> <li> <p> <code>BAD_REQUEST</code>: a general bad request: for example, a step that attempts to tag an EFS file returns <code>BAD_REQUEST</code>, as only S3 files can be tagged.</p> </li> <li> <p> <code>CUSTOM_STEP_FAILED</code>: occurs when the custom step provided a callback that indicates failure.</p> </li> <li> <p> <code>INTERNAL_SERVER_ERROR</code>: a catch-all error that can occur for a variety of reasons.</p> </li> <li> <p> <code>NOT_FOUND</code>: occurs when a requested entity, for example a source file for a copy step, does not exist.</p> </li> <li> <p> <code>PERMISSION_DENIED</code>: occurs if your policy does not contain the correct permissions to complete one or more of the steps in the workflow.</p> </li> <li> <p> <code>TIMEOUT</code>: occurs when the execution times out.</p> <note> <p> You can set the <code>TimeoutSeconds</code> for a custom step, anywhere from 1 second to 1800 seconds (30 minutes). </p> </note> </li> <li> <p> <code>THROTTLED</code>: occurs if you exceed the new execution refill rate of one workflow per second.</p> </li> </ul>"
},
"Message":{
"shape":"ExecutionErrorMessage",
"documentation":"<p>Specifies the descriptive message that corresponds to the <code>ErrorType</code>.</p>"
}
},
"documentation":"<p>Specifies the error message and type, for an error that occurs during the execution of the workflow.</p>"
},
"ExecutionErrorMessage":{"type":"string"},
"ExecutionErrorType":{
"type":"string",
"enum":[
"PERMISSION_DENIED",
"CUSTOM_STEP_FAILED",
"THROTTLED",
"ALREADY_EXISTS",
"NOT_FOUND",
"BAD_REQUEST",
"TIMEOUT",
"INTERNAL_SERVER_ERROR"
]
},
"ExecutionId":{
"type":"string",
"max":36,
"min":36,
"pattern":"^[0-9a-fA-F]{8}\\-[0-9a-fA-F]{4}\\-[0-9a-fA-F]{4}\\-[0-9a-fA-F]{4}\\-[0-9a-fA-F]{12}$"
},
"ExecutionResults":{
"type":"structure",
"members":{
"Steps":{
"shape":"ExecutionStepResults",
"documentation":"<p>Specifies the details for the steps that are in the specified workflow.</p>"
},
"OnExceptionSteps":{
"shape":"ExecutionStepResults",
"documentation":"<p>Specifies the steps (actions) to take if errors are encountered during execution of the workflow.</p>"
}
},
"documentation":"<p>Specifies the steps in the workflow, as well as the steps to execute in case of any errors during workflow execution.</p>"
},
"ExecutionStatus":{
"type":"string",
"enum":[
"IN_PROGRESS",
"COMPLETED",
"EXCEPTION",
"HANDLING_EXCEPTION"
]
},
"ExecutionStepResult":{
"type":"structure",
"members":{
"StepType":{
"shape":"WorkflowStepType",
"documentation":"<p>One of the available step types.</p> <ul> <li> <p> <i>COPY</i>: Copy the file to another location.</p> </li> <li> <p> <i>CUSTOM</i>: Perform a custom step with an Lambda function target.</p> </li> <li> <p> <i>DELETE</i>: Delete the file.</p> </li> <li> <p> <i>TAG</i>: Add a tag to the file.</p> </li> </ul>"
},
"Outputs":{
"shape":"StepResultOutputsJson",
"documentation":"<p>The values for the key/value pair applied as a tag to the file. Only applicable if the step type is <code>TAG</code>.</p>"
},
"Error":{
"shape":"ExecutionError",
"documentation":"<p>Specifies the details for an error, if it occurred during execution of the specified workflow step.</p>"
}
},
"documentation":"<p>Specifies the following details for the step: error (if any), outputs (if any), and the step type.</p>"
},
"ExecutionStepResults":{
"type":"list",
"member":{"shape":"ExecutionStepResult"},
"max":50,
"min":1
},
"ExternalId":{
"type":"string",
"max":256,
"min":1,
"pattern":"^S-1-[\\d-]+$"
},
"FileLocation":{
"type":"structure",
"members":{
"S3FileLocation":{
"shape":"S3FileLocation",
"documentation":"<p>Specifies the S3 details for the file being used, such as bucket, ETag, and so forth.</p>"
},
"EfsFileLocation":{
"shape":"EfsFileLocation",
"documentation":"<p>Specifies the Amazon EFS identifier and the path for the file being used.</p>"
}
},
"documentation":"<p>Specifies the Amazon S3 or EFS file details to be used in the step.</p>"
},
"FilePath":{
"type":"string",
"max":1024,
"min":1,
"pattern":"^(.)+"
},
"FilePaths":{
"type":"list",
"member":{"shape":"FilePath"},
"max":10,
"min":1
},
"Fips":{"type":"boolean"},
"Function":{
"type":"string",
"max":170,
"min":1,
"pattern":"^arn:[a-z-]+:lambda:.*$"
},
"HomeDirectory":{
"type":"string",
"max":1024,
"pattern":"^$|/.*"
},
"HomeDirectoryMapEntry":{
"type":"structure",
"required":[
"Entry",
"Target"
],
"members":{
"Entry":{
"shape":"MapEntry",
"documentation":"<p>Represents an entry for <code>HomeDirectoryMappings</code>.</p>"
},
"Target":{
"shape":"MapTarget",
"documentation":"<p>Represents the map target that is used in a <code>HomeDirectorymapEntry</code>.</p>"
}
},
"documentation":"<p>Represents an object that contains entries and targets for <code>HomeDirectoryMappings</code>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>"
},
"HomeDirectoryMappings":{
"type":"list",
"member":{"shape":"HomeDirectoryMapEntry"},
"max":50,
"min":1
},
"HomeDirectoryType":{
"type":"string",
"enum":[
"PATH",
"LOGICAL"
]
},
"HostKey":{
"type":"string",
"max":4096,
"sensitive":true
},
"HostKeyDescription":{
"type":"string",
"max":200,
"min":0,
"pattern":"^[\\p{Print}]*$"
},
"HostKeyFingerprint":{"type":"string"},
"HostKeyId":{
"type":"string",
"max":25,
"min":25,
"pattern":"^hostkey-[0-9a-f]{17}$"
},
"HostKeyType":{"type":"string"},
"IdentityProviderDetails":{
"type":"structure",
"members":{
"Url":{
"shape":"Url",
"documentation":"<p>Provides the location of the service endpoint used to authenticate users.</p>"
},
"InvocationRole":{
"shape":"Role",
"documentation":"<p>Provides the type of <code>InvocationRole</code> used to authenticate the user account.</p>"
},
"DirectoryId":{
"shape":"DirectoryId",
"documentation":"<p>The identifier of the Directory Service directory that you want to stop sharing.</p>"
},
"Function":{
"shape":"Function",
"documentation":"<p>The ARN for a lambda function to use for the Identity provider.</p>"
}
},
"documentation":"<p>Returns information related to the type of user authentication that is in use for a file transfer protocol-enabled server's users. A server can have only one method of authentication.</p>"
},
"IdentityProviderType":{
"type":"string",
"documentation":"<p>Returns information related to the type of user authentication that is in use for a file transfer protocol-enabled server's users. For <code>AWS_DIRECTORY_SERVICE</code> or <code>SERVICE_MANAGED</code> authentication, the Secure Shell (SSH) public keys are stored with a user on the server instance. For <code>API_GATEWAY</code> authentication, your custom authentication method is implemented by using an API call. The server can have only one method of authentication.</p>",
"enum":[
"SERVICE_MANAGED",
"API_GATEWAY",
"AWS_DIRECTORY_SERVICE",
"AWS_LAMBDA"
]
},
"ImportCertificateRequest":{
"type":"structure",
"required":[
"Usage",
"Certificate"
],
"members":{
"Usage":{
"shape":"CertificateUsageType",
"documentation":"<p>Specifies whether this certificate is used for signing or encryption.</p>"
},
"Certificate":{
"shape":"CertificateBodyType",
"documentation":"<p>The file that contains the certificate to import.</p>"
},
"CertificateChain":{
"shape":"CertificateChainType",
"documentation":"<p>An optional list of certificates that make up the chain for the certificate that's being imported.</p>"
},
"PrivateKey":{
"shape":"PrivateKeyType",
"documentation":"<p>The file that contains the private key for the certificate that's being imported.</p>"
},
"ActiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes active.</p>"
},
"InactiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes inactive.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>A short description that helps identify the certificate. </p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for certificates.</p>"
}
}
},
"ImportCertificateResponse":{
"type":"structure",
"required":["CertificateId"],
"members":{
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
}
}
},
"ImportHostKeyRequest":{
"type":"structure",
"required":[
"ServerId",
"HostKeyBody"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that contains the host key that you are importing.</p>"
},
"HostKeyBody":{
"shape":"HostKey",
"documentation":"<p>The public key portion of an SSH key pair.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p>"
},
"Description":{
"shape":"HostKeyDescription",
"documentation":"<p>The text description that identifies this host key.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that can be used to group and search for host keys.</p>"
}
}
},
"ImportHostKeyResponse":{
"type":"structure",
"required":[
"ServerId",
"HostKeyId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>Returns the server identifier that contains the imported key.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>Returns the host key identifier for the imported key.</p>"
}
}
},
"ImportSshPublicKeyRequest":{
"type":"structure",
"required":[
"ServerId",
"SshPublicKeyBody",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server.</p>"
},
"SshPublicKeyBody":{
"shape":"SshPublicKeyBody",
"documentation":"<p>The public key portion of an SSH key pair.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>The name of the user account that is assigned to one or more servers.</p>"
}
}
},
"ImportSshPublicKeyResponse":{
"type":"structure",
"required":[
"ServerId",
"SshPublicKeyId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server.</p>"
},
"SshPublicKeyId":{
"shape":"SshPublicKeyId",
"documentation":"<p>The name given to a public key by the system that was imported.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>A user name assigned to the <code>ServerID</code> value that you specified.</p>"
}
},
"documentation":"<p>Identifies the user, the server they belong to, and the identifier of the SSH public key associated with that user. A user can have more than one key on each server that they are associated with.</p>"
},
"InputFileLocation":{
"type":"structure",
"members":{
"S3FileLocation":{
"shape":"S3InputFileLocation",
"documentation":"<p>Specifies the details for the S3 file being copied.</p>"
},
"EfsFileLocation":{
"shape":"EfsFileLocation",
"documentation":"<p>Reserved for future use.</p>"
}
},
"documentation":"<p>Specifies the location for the file being copied. Only applicable for the Copy type of workflow steps.</p>"
},
"InternalServiceError":{
"type":"structure",
"required":["Message"],
"members":{
"Message":{"shape":"Message"}
},
"documentation":"<p>This exception is thrown when an error occurs in the Amazon Web ServicesTransfer Family service.</p>",
"exception":true,
"fault":true
},
"InvalidNextTokenException":{
"type":"structure",
"required":["Message"],
"members":{
"Message":{"shape":"Message"}
},
"documentation":"<p>The <code>NextToken</code> parameter that was passed is invalid.</p>",
"exception":true
},
"InvalidRequestException":{
"type":"structure",
"required":["Message"],
"members":{
"Message":{"shape":"Message"}
},
"documentation":"<p>This exception is thrown when the client submits a malformed request.</p>",
"exception":true
},
"ListAccessesRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the maximum number of access SIDs to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListAccesses</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional accesses.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has users assigned to it.</p>"
}
}
},
"ListAccessesResponse":{
"type":"structure",
"required":[
"ServerId",
"Accesses"
],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListAccesses</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional accesses.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has users assigned to it.</p>"
},
"Accesses":{
"shape":"ListedAccesses",
"documentation":"<p>Returns the accesses and their properties for the <code>ServerId</code> value that you specify.</p>"
}
}
},
"ListAgreementsRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>The maximum number of agreements to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListAgreements</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional agreements.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server for which you want a list of agreements.</p>"
}
}
},
"ListAgreementsResponse":{
"type":"structure",
"required":["Agreements"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>Returns a token that you can use to call <code>ListAgreements</code> again and receive additional results, if there are any.</p>"
},
"Agreements":{
"shape":"ListedAgreements",
"documentation":"<p>Returns an array, where each item contains the details of an agreement.</p>"
}
}
},
"ListCertificatesRequest":{
"type":"structure",
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>The maximum number of certificates to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListCertificates</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional certificates.</p>"
}
}
},
"ListCertificatesResponse":{
"type":"structure",
"required":["Certificates"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>Returns the next token, which you can use to list the next certificate.</p>"
},
"Certificates":{
"shape":"ListedCertificates",
"documentation":"<p>Returns an array of the certificates that are specified in the <code>ListCertificates</code> call.</p>"
}
}
},
"ListConnectorsRequest":{
"type":"structure",
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>The maximum number of connectors to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListConnectors</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional connectors.</p>"
}
}
},
"ListConnectorsResponse":{
"type":"structure",
"required":["Connectors"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>Returns a token that you can use to call <code>ListConnectors</code> again and receive additional results, if there are any.</p>"
},
"Connectors":{
"shape":"ListedConnectors",
"documentation":"<p>Returns an array, where each item contains the details of a connector.</p>"
}
}
},
"ListExecutionsRequest":{
"type":"structure",
"required":["WorkflowId"],
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the maximum number of executions to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p> <code>ListExecutions</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional executions.</p> <p> This is useful for pagination, for instance. If you have 100 executions for a workflow, you might only want to list first 10. If so, call the API by specifying the <code>max-results</code>: </p> <p> <code>aws transfer list-executions --max-results 10</code> </p> <p> This returns details for the first 10 executions, as well as the pointer (<code>NextToken</code>) to the eleventh execution. You can now call the API again, supplying the <code>NextToken</code> value you received: </p> <p> <code>aws transfer list-executions --max-results 10 --next-token $somePointerReturnedFromPreviousListResult</code> </p> <p> This call returns the next 10 executions, the 11th through the 20th. You can then repeat the call until the details for all 100 executions have been returned. </p>"
},
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
}
}
},
"ListExecutionsResponse":{
"type":"structure",
"required":[
"WorkflowId",
"Executions"
],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p> <code>ListExecutions</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional executions.</p>"
},
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
},
"Executions":{
"shape":"ListedExecutions",
"documentation":"<p>Returns the details for each execution.</p> <ul> <li> <p> <b>NextToken</b>: returned from a call to several APIs, you can use pass it to a subsequent command to continue listing additional executions.</p> </li> <li> <p> <b>StartTime</b>: timestamp indicating when the execution began.</p> </li> <li> <p> <b>Executions</b>: details of the execution, including the execution ID, initial file location, and Service metadata.</p> </li> <li> <p> <b>Status</b>: one of the following values: <code>IN_PROGRESS</code>, <code>COMPLETED</code>, <code>EXCEPTION</code>, <code>HANDLING_EXEPTION</code>. </p> </li> </ul>"
}
}
},
"ListHostKeysRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>The maximum number of host keys to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When there are additional results that were not returned, a <code>NextToken</code> parameter is returned. You can use that value for a subsequent call to <code>ListHostKeys</code> to continue listing results.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that contains the host keys that you want to view.</p>"
}
}
},
"ListHostKeysResponse":{
"type":"structure",
"required":[
"ServerId",
"HostKeys"
],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>Returns a token that you can use to call <code>ListHostKeys</code> again and receive additional results, if there are any.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>Returns the server identifier that contains the listed host keys.</p>"
},
"HostKeys":{
"shape":"ListedHostKeys",
"documentation":"<p>Returns an array, where each item contains the details of a host key.</p>"
}
}
},
"ListProfilesRequest":{
"type":"structure",
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>The maximum number of profiles to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When there are additional results that were not returned, a <code>NextToken</code> parameter is returned. You can use that value for a subsequent call to <code>ListProfiles</code> to continue listing results.</p>"
},
"ProfileType":{
"shape":"ProfileType",
"documentation":"<p>Indicates whether to list only <code>LOCAL</code> type profiles or only <code>PARTNER</code> type profiles. If not supplied in the request, the command lists all types of profiles.</p>"
}
}
},
"ListProfilesResponse":{
"type":"structure",
"required":["Profiles"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>Returns a token that you can use to call <code>ListProfiles</code> again and receive additional results, if there are any.</p>"
},
"Profiles":{
"shape":"ListedProfiles",
"documentation":"<p>Returns an array, where each item contains the details of a profile.</p>"
}
}
},
"ListSecurityPoliciesRequest":{
"type":"structure",
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the number of security policies to return as a response to the <code>ListSecurityPolicies</code> query.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When additional results are obtained from the <code>ListSecurityPolicies</code> command, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional security policies.</p>"
}
}
},
"ListSecurityPoliciesResponse":{
"type":"structure",
"required":["SecurityPolicyNames"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListSecurityPolicies</code> operation, a <code>NextToken</code> parameter is returned in the output. In a following command, you can pass in the <code>NextToken</code> parameter to continue listing security policies.</p>"
},
"SecurityPolicyNames":{
"shape":"SecurityPolicyNames",
"documentation":"<p>An array of security policies that were listed.</p>"
}
}
},
"ListServersRequest":{
"type":"structure",
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the number of servers to return as a response to the <code>ListServers</code> query.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When additional results are obtained from the <code>ListServers</code> command, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional servers.</p>"
}
}
},
"ListServersResponse":{
"type":"structure",
"required":["Servers"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListServers</code> operation, a <code>NextToken</code> parameter is returned in the output. In a following command, you can pass in the <code>NextToken</code> parameter to continue listing additional servers.</p>"
},
"Servers":{
"shape":"ListedServers",
"documentation":"<p>An array of servers that were listed.</p>"
}
}
},
"ListTagsForResourceRequest":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>Requests the tags associated with a particular Amazon Resource Name (ARN). An ARN is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.</p>"
},
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the number of tags to return as a response to the <code>ListTagsForResource</code> request.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you request additional results from the <code>ListTagsForResource</code> operation, a <code>NextToken</code> parameter is returned in the input. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional tags.</p>"
}
}
},
"ListTagsForResourceResponse":{
"type":"structure",
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The ARN you specified to list the tags of.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListTagsForResource</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional tags.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs that are assigned to a resource, usually for the purpose of grouping and searching for items. Tags are metadata that you define.</p>"
}
}
},
"ListUsersRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the number of users to return as a response to the <code>ListUsers</code> request.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListUsers</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional users.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that has users assigned to it.</p>"
}
}
},
"ListUsersResponse":{
"type":"structure",
"required":[
"ServerId",
"Users"
],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p>When you can get additional results from the <code>ListUsers</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass in a subsequent command to the <code>NextToken</code> parameter to continue listing additional users.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that the users are assigned to.</p>"
},
"Users":{
"shape":"ListedUsers",
"documentation":"<p>Returns the user accounts and their properties for the <code>ServerId</code> value that you specify.</p>"
}
}
},
"ListWorkflowsRequest":{
"type":"structure",
"members":{
"MaxResults":{
"shape":"MaxResults",
"documentation":"<p>Specifies the maximum number of workflows to return.</p>"
},
"NextToken":{
"shape":"NextToken",
"documentation":"<p> <code>ListWorkflows</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional workflows.</p>"
}
}
},
"ListWorkflowsResponse":{
"type":"structure",
"required":["Workflows"],
"members":{
"NextToken":{
"shape":"NextToken",
"documentation":"<p> <code>ListWorkflows</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional workflows.</p>"
},
"Workflows":{
"shape":"ListedWorkflows",
"documentation":"<p>Returns the <code>Arn</code>, <code>WorkflowId</code>, and <code>Description</code> for each workflow.</p>"
}
}
},
"ListedAccess":{
"type":"structure",
"members":{
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>"
}
},
"documentation":"<p>Lists the properties for one or more specified associated accesses.</p>"
},
"ListedAccesses":{
"type":"list",
"member":{"shape":"ListedAccess"}
},
"ListedAgreement":{
"type":"structure",
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The Amazon Resource Name (ARN) of the specified agreement.</p>"
},
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>The current description for the agreement. You can change it by calling the <code>UpdateAgreement</code> operation and providing a new description. </p>"
},
"Status":{
"shape":"AgreementStatusType",
"documentation":"<p>The agreement can be either <code>ACTIVE</code> or <code>INACTIVE</code>.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The unique identifier for the agreement.</p>"
},
"LocalProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the AS2 local profile.</p>"
},
"PartnerProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the partner profile.</p>"
}
},
"documentation":"<p>Describes the properties of an agreement.</p>"
},
"ListedAgreements":{
"type":"list",
"member":{"shape":"ListedAgreement"}
},
"ListedCertificate":{
"type":"structure",
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The Amazon Resource Name (ARN) of the specified certificate.</p>"
},
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
},
"Usage":{
"shape":"CertificateUsageType",
"documentation":"<p>Specifies whether this certificate is used for signing or encryption.</p>"
},
"Status":{
"shape":"CertificateStatusType",
"documentation":"<p>The certificate can be either <code>ACTIVE</code>, <code>PENDING_ROTATION</code>, or <code>INACTIVE</code>. <code>PENDING_ROTATION</code> means that this certificate will replace the current certificate when it expires.</p>"
},
"ActiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes active.</p>"
},
"InactiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes inactive.</p>"
},
"Type":{
"shape":"CertificateType",
"documentation":"<p>The type for the certificate. If a private key has been specified for the certificate, its type is <code>CERTIFICATE_WITH_PRIVATE_KEY</code>. If there is no private key, the type is <code>CERTIFICATE</code>.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>The name or short description that's used to identify the certificate.</p>"
}
},
"documentation":"<p>Describes the properties of a certificate.</p>"
},
"ListedCertificates":{
"type":"list",
"member":{"shape":"ListedCertificate"}
},
"ListedConnector":{
"type":"structure",
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The Amazon Resource Name (ARN) of the specified connector.</p>"
},
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector.</p>"
},
"Url":{
"shape":"Url",
"documentation":"<p>The URL of the partner's AS2 endpoint.</p>"
}
},
"documentation":"<p>Returns details of the connector that is specified.</p>"
},
"ListedConnectors":{
"type":"list",
"member":{"shape":"ListedConnector"}
},
"ListedExecution":{
"type":"structure",
"members":{
"ExecutionId":{
"shape":"ExecutionId",
"documentation":"<p>A unique identifier for the execution of a workflow.</p>"
},
"InitialFileLocation":{
"shape":"FileLocation",
"documentation":"<p>A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.</p>"
},
"ServiceMetadata":{
"shape":"ServiceMetadata",
"documentation":"<p>A container object for the session details that are associated with a workflow.</p>"
},
"Status":{
"shape":"ExecutionStatus",
"documentation":"<p>The status is one of the execution. Can be in progress, completed, exception encountered, or handling the exception.</p>"
}
},
"documentation":"<p>Returns properties of the execution that is specified.</p>"
},
"ListedExecutions":{
"type":"list",
"member":{"shape":"ListedExecution"}
},
"ListedHostKey":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The unique Amazon Resource Name (ARN) of the host key.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>A unique identifier for the host key.</p>"
},
"Fingerprint":{
"shape":"HostKeyFingerprint",
"documentation":"<p>The public key fingerprint, which is a short sequence of bytes used to identify the longer public key.</p>"
},
"Description":{
"shape":"HostKeyDescription",
"documentation":"<p>The current description for the host key. You can change it by calling the <code>UpdateHostKey</code> operation and providing a new description.</p>"
},
"Type":{
"shape":"HostKeyType",
"documentation":"<p>The encryption algorithm that is used for the host key. The <code>Type</code> parameter is specified by using one of the following values:</p> <ul> <li> <p> <code>ssh-rsa</code> </p> </li> <li> <p> <code>ssh-ed25519</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp256</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp384</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp521</code> </p> </li> </ul>"
},
"DateImported":{
"shape":"DateImported",
"documentation":"<p>The date on which the host key was added to the server.</p>"
}
},
"documentation":"<p>Returns properties of the host key that's specified.</p>"
},
"ListedHostKeys":{
"type":"list",
"member":{"shape":"ListedHostKey"}
},
"ListedProfile":{
"type":"structure",
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The Amazon Resource Name (ARN) of the specified profile.</p>"
},
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the local or partner AS2 profile.</p>"
},
"As2Id":{
"shape":"As2Id",
"documentation":"<p>The <code>As2Id</code> is the <i>AS2-name</i>, as defined in the <a href=\"https://datatracker.ietf.org/doc/html/rfc4130\">RFC 4130</a>. For inbound transfers, this is the <code>AS2-From</code> header for the AS2 messages sent from the partner. For outbound connectors, this is the <code>AS2-To</code> header for the AS2 messages sent to the partner using the <code>StartFileTransfer</code> API operation. This ID cannot include spaces.</p>"
},
"ProfileType":{
"shape":"ProfileType",
"documentation":"<p>Indicates whether to list only <code>LOCAL</code> type profiles or only <code>PARTNER</code> type profiles. If not supplied in the request, the command lists all types of profiles.</p>"
}
},
"documentation":"<p>Returns the properties of the profile that was specified.</p>"
},
"ListedProfiles":{
"type":"list",
"member":{"shape":"ListedProfile"}
},
"ListedServer":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>Specifies the unique Amazon Resource Name (ARN) for a server to be listed.</p>"
},
"Domain":{
"shape":"Domain",
"documentation":"<p>Specifies the domain of the storage system that is used for file transfers.</p>"
},
"IdentityProviderType":{
"shape":"IdentityProviderType",
"documentation":"<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use an Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter or the <code>IdentityProviderDetails</code> data type.</p>"
},
"EndpointType":{
"shape":"EndpointType",
"documentation":"<p>Specifies the type of VPC endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.</p>"
},
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>Specifies the unique system assigned identifier for the servers that were listed.</p>"
},
"State":{
"shape":"State",
"documentation":"<p>The condition of the server that was described. A value of <code>ONLINE</code> indicates that the server can accept jobs and transfer files. A <code>State</code> value of <code>OFFLINE</code> means that the server cannot perform file transfer operations.</p> <p>The states of <code>STARTING</code> and <code>STOPPING</code> indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of <code>START_FAILED</code> or <code>STOP_FAILED</code> can indicate an error condition.</p>"
},
"UserCount":{
"shape":"UserCount",
"documentation":"<p>Specifies the number of users that are assigned to a server you specified with the <code>ServerId</code>.</p>"
}
},
"documentation":"<p>Returns properties of a file transfer protocol-enabled server that was specified.</p>"
},
"ListedServers":{
"type":"list",
"member":{"shape":"ListedServer"}
},
"ListedUser":{
"type":"structure",
"required":["Arn"],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>Provides the unique Amazon Resource Name (ARN) for the user that you want to learn about.</p>"
},
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p> <note> <p>The IAM role that controls your users' access to your Amazon S3 bucket for servers with <code>Domain=S3</code>, or your EFS file system for servers with <code>Domain=EFS</code>. </p> <p>The policies attached to this role determine the level of access you want to provide your users when transferring files into and out of your S3 buckets or EFS file systems.</p> </note>"
},
"SshPublicKeyCount":{
"shape":"SshPublicKeyCount",
"documentation":"<p>Specifies the number of SSH public keys stored for the user you specified.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>Specifies the name of the user whose ARN was specified. User names are used for authentication purposes.</p>"
}
},
"documentation":"<p>Returns properties of the user that you specify.</p>"
},
"ListedUsers":{
"type":"list",
"member":{"shape":"ListedUser"}
},
"ListedWorkflow":{
"type":"structure",
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
},
"Description":{
"shape":"WorkflowDescription",
"documentation":"<p>Specifies the text description for the workflow.</p>"
},
"Arn":{
"shape":"Arn",
"documentation":"<p>Specifies the unique Amazon Resource Name (ARN) for the workflow.</p>"
}
},
"documentation":"<p>Contains the identifier, text description, and Amazon Resource Name (ARN) for the workflow.</p>"
},
"ListedWorkflows":{
"type":"list",
"member":{"shape":"ListedWorkflow"}
},
"LogGroupName":{
"type":"string",
"max":512,
"min":1,
"pattern":"[\\.\\-_/#A-Za-z0-9]*"
},
"LoggingConfiguration":{
"type":"structure",
"members":{
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>"
},
"LogGroupName":{
"shape":"LogGroupName",
"documentation":"<p>The name of the CloudWatch logging group for the Transfer Family server to which this workflow belongs.</p>"
}
},
"documentation":"<p>Consists of the logging role and the log group name.</p>"
},
"MapEntry":{
"type":"string",
"max":1024,
"pattern":"^/.*"
},
"MapTarget":{
"type":"string",
"max":1024,
"pattern":"^/.*"
},
"MaxResults":{
"type":"integer",
"max":1000,
"min":1
},
"MdnResponse":{
"type":"string",
"enum":[
"SYNC",
"NONE"
]
},
"MdnSigningAlg":{
"type":"string",
"enum":[
"SHA256",
"SHA384",
"SHA512",
"SHA1",
"NONE",
"DEFAULT"
]
},
"Message":{"type":"string"},
"MessageSubject":{
"type":"string",
"max":1024,
"min":1,
"pattern":"^[\\p{Print}\\p{Blank}]+"
},
"NextToken":{
"type":"string",
"max":6144,
"min":1
},
"NullableRole":{
"type":"string",
"max":2048,
"pattern":"^$|arn:.*role/.*"
},
"OnPartialUploadWorkflowDetails":{
"type":"list",
"member":{"shape":"WorkflowDetail"},
"max":1
},
"OnUploadWorkflowDetails":{
"type":"list",
"member":{"shape":"WorkflowDetail"},
"max":1
},
"OverwriteExisting":{
"type":"string",
"enum":[
"TRUE",
"FALSE"
]
},
"PassiveIp":{
"type":"string",
"max":15
},
"Policy":{
"type":"string",
"max":2048
},
"PosixId":{
"type":"long",
"max":4294967295,
"min":0
},
"PosixProfile":{
"type":"structure",
"required":[
"Uid",
"Gid"
],
"members":{
"Uid":{
"shape":"PosixId",
"documentation":"<p>The POSIX user ID used for all EFS operations by this user.</p>"
},
"Gid":{
"shape":"PosixId",
"documentation":"<p>The POSIX group ID used for all EFS operations by this user.</p>"
},
"SecondaryGids":{
"shape":"SecondaryGids",
"documentation":"<p>The secondary POSIX group IDs used for all EFS operations by this user.</p>"
}
},
"documentation":"<p>The full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>"
},
"PostAuthenticationLoginBanner":{
"type":"string",
"max":512,
"pattern":"[\\x09-\\x0D\\x20-\\x7E]*"
},
"PreAuthenticationLoginBanner":{
"type":"string",
"max":512,
"pattern":"[\\x09-\\x0D\\x20-\\x7E]*"
},
"PrivateKeyType":{
"type":"string",
"max":16384,
"min":1,
"pattern":"^[\\u0009\\u000A\\u000D\\u0020-\\u00FF]*",
"sensitive":true
},
"ProfileId":{
"type":"string",
"max":19,
"min":19,
"pattern":"^p-([0-9a-f]{17})$"
},
"ProfileType":{
"type":"string",
"enum":[
"LOCAL",
"PARTNER"
]
},
"Protocol":{
"type":"string",
"enum":[
"SFTP",
"FTP",
"FTPS",
"AS2"
]
},
"ProtocolDetails":{
"type":"structure",
"members":{
"PassiveIp":{
"shape":"PassiveIp",
"documentation":"<p> Indicates passive mode, for FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For example: </p> <p> <code>aws transfer update-server --protocol-details PassiveIp=0.0.0.0</code> </p> <p>Replace <code>0.0.0.0</code> in the example above with the actual IP address you want to use.</p> <note> <p> If you change the <code>PassiveIp</code> value, you must stop and then restart your Transfer Family server for the change to take effect. For details on using passive mode (PASV) in a NAT environment, see <a href=\"http://aws.amazon.com/blogs/storage/configuring-your-ftps-server-behind-a-firewall-or-nat-with-aws-transfer-family/\">Configuring your FTPS server behind a firewall or NAT with Transfer Family</a>. </p> </note> <p> <i>Special values</i> </p> <p>The <code>AUTO</code> and <code>0.0.0.0</code> are special values for the <code>PassiveIp</code> parameter. The value <code>PassiveIp=AUTO</code> is assigned by default to FTP and FTPS type servers. In this case, the server automatically responds with one of the endpoint IPs within the PASV response. <code>PassiveIp=0.0.0.0</code> has a more unique application for its usage. For example, if you have a High Availability (HA) Network Load Balancer (NLB) environment, where you have 3 subnets, you can only specify a single IP address using the <code>PassiveIp</code> parameter. This reduces the effectiveness of having High Availability. In this case, you can specify <code>PassiveIp=0.0.0.0</code>. This tells the client to use the same IP address as the Control connection and utilize all AZs for their connections. Note, however, that not all FTP clients support the <code>PassiveIp=0.0.0.0</code> response. FileZilla and WinSCP do support it. If you are using other clients, check to see if your client supports the <code>PassiveIp=0.0.0.0</code> response.</p>"
},
"TlsSessionResumptionMode":{
"shape":"TlsSessionResumptionMode",
"documentation":"<p>A property used with Transfer Family servers that use the FTPS protocol. TLS Session Resumption provides a mechanism to resume or share a negotiated secret key between the control and data connection for an FTPS session. <code>TlsSessionResumptionMode</code> determines whether or not the server resumes recent, negotiated sessions through a unique session ID. This property is available during <code>CreateServer</code> and <code>UpdateServer</code> calls. If a <code>TlsSessionResumptionMode</code> value is not specified during <code>CreateServer</code>, it is set to <code>ENFORCED</code> by default.</p> <ul> <li> <p> <code>DISABLED</code>: the server does not process TLS session resumption client requests and creates a new TLS session for each request. </p> </li> <li> <p> <code>ENABLED</code>: the server processes and accepts clients that are performing TLS session resumption. The server doesn't reject client data connections that do not perform the TLS session resumption client processing.</p> </li> <li> <p> <code>ENFORCED</code>: the server processes and accepts clients that are performing TLS session resumption. The server rejects client data connections that do not perform the TLS session resumption client processing. Before you set the value to <code>ENFORCED</code>, test your clients.</p> <note> <p>Not all FTPS clients perform TLS session resumption. So, if you choose to enforce TLS session resumption, you prevent any connections from FTPS clients that don't perform the protocol negotiation. To determine whether or not you can use the <code>ENFORCED</code> value, you need to test your clients.</p> </note> </li> </ul>"
},
"SetStatOption":{
"shape":"SetStatOption",
"documentation":"<p>Use the <code>SetStatOption</code> to ignore the error that is generated when the client attempts to use <code>SETSTAT</code> on a file you are uploading to an S3 bucket.</p> <p>Some SFTP file transfer clients can attempt to change the attributes of remote files, including timestamp and permissions, using commands, such as <code>SETSTAT</code> when uploading the file. However, these commands are not compatible with object storage systems, such as Amazon S3. Due to this incompatibility, file uploads from these clients can result in errors even when the file is otherwise successfully uploaded.</p> <p>Set the value to <code>ENABLE_NO_OP</code> to have the Transfer Family server ignore the <code>SETSTAT</code> command, and upload files without needing to make any changes to your SFTP client. While the <code>SetStatOption</code> <code>ENABLE_NO_OP</code> setting ignores the error, it does generate a log entry in Amazon CloudWatch Logs, so you can determine when the client is making a <code>SETSTAT</code> call.</p> <note> <p>If you want to preserve the original timestamp for your file, and modify other file attributes using <code>SETSTAT</code>, you can use Amazon EFS as backend storage with Transfer Family.</p> </note>"
},
"As2Transports":{
"shape":"As2Transports",
"documentation":"<p>Indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p>"
}
},
"documentation":"<p> The protocol settings that are configured for your server. </p>"
},
"Protocols":{
"type":"list",
"member":{"shape":"Protocol"},
"max":4,
"min":1
},
"Resource":{"type":"string"},
"ResourceExistsException":{
"type":"structure",
"required":[
"Message",
"Resource",
"ResourceType"
],
"members":{
"Message":{"shape":"Message"},
"Resource":{"shape":"Resource"},
"ResourceType":{"shape":"ResourceType"}
},
"documentation":"<p>The requested resource does not exist.</p>",
"exception":true
},
"ResourceNotFoundException":{
"type":"structure",
"required":[
"Message",
"Resource",
"ResourceType"
],
"members":{
"Message":{"shape":"Message"},
"Resource":{"shape":"Resource"},
"ResourceType":{"shape":"ResourceType"}
},
"documentation":"<p>This exception is thrown when a resource is not found by the Amazon Web ServicesTransfer Family service.</p>",
"exception":true
},
"ResourceType":{"type":"string"},
"Response":{"type":"string"},
"RetryAfterSeconds":{"type":"string"},
"Role":{
"type":"string",
"max":2048,
"min":20,
"pattern":"arn:.*role/.*"
},
"S3Bucket":{
"type":"string",
"max":63,
"min":3,
"pattern":"^[a-z0-9][\\.\\-a-z0-9]{1,61}[a-z0-9]$"
},
"S3Etag":{
"type":"string",
"max":65536,
"min":1,
"pattern":"^.+$"
},
"S3FileLocation":{
"type":"structure",
"members":{
"Bucket":{
"shape":"S3Bucket",
"documentation":"<p>Specifies the S3 bucket that contains the file being used.</p>"
},
"Key":{
"shape":"S3Key",
"documentation":"<p>The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.</p>"
},
"VersionId":{
"shape":"S3VersionId",
"documentation":"<p>Specifies the file version.</p>"
},
"Etag":{
"shape":"S3Etag",
"documentation":"<p>The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata.</p>"
}
},
"documentation":"<p>Specifies the details for the file location for the file that's being used in the workflow. Only applicable if you are using S3 storage.</p>"
},
"S3InputFileLocation":{
"type":"structure",
"members":{
"Bucket":{
"shape":"S3Bucket",
"documentation":"<p>Specifies the S3 bucket for the customer input file.</p>"
},
"Key":{
"shape":"S3Key",
"documentation":"<p>The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.</p>"
}
},
"documentation":"<p>Specifies the customer input S3 file location. If it is used inside <code>copyStepDetails.DestinationFileLocation</code>, it should be the S3 copy destination.</p> <p> You need to provide the bucket and key. The key can represent either a path or a file. This is determined by whether or not you end the key value with the forward slash (/) character. If the final character is \"/\", then your file is copied to the folder, and its name does not change. If, rather, the final character is alphanumeric, your uploaded file is renamed to the path value. In this case, if a file with that name already exists, it is overwritten. </p> <p>For example, if your path is <code>shared-files/bob/</code>, your uploaded files are copied to the <code>shared-files/bob/</code>, folder. If your path is <code>shared-files/today</code>, each uploaded file is copied to the <code>shared-files</code> folder and named <code>today</code>: each upload overwrites the previous version of the <i>bob</i> file.</p>"
},
"S3Key":{
"type":"string",
"max":1024,
"pattern":"[\\P{M}\\p{M}]*"
},
"S3Tag":{
"type":"structure",
"required":[
"Key",
"Value"
],
"members":{
"Key":{
"shape":"S3TagKey",
"documentation":"<p>The name assigned to the tag that you create.</p>"
},
"Value":{
"shape":"S3TagValue",
"documentation":"<p>The value that corresponds to the key.</p>"
}
},
"documentation":"<p>Specifies the key-value pair that are assigned to a file during the execution of a Tagging step.</p>"
},
"S3TagKey":{
"type":"string",
"max":128,
"min":1,
"pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$"
},
"S3TagValue":{
"type":"string",
"max":256,
"pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$"
},
"S3Tags":{
"type":"list",
"member":{"shape":"S3Tag"},
"max":10,
"min":1
},
"S3VersionId":{
"type":"string",
"max":1024,
"min":1,
"pattern":"^.+$"
},
"SecondaryGids":{
"type":"list",
"member":{"shape":"PosixId"},
"max":16,
"min":0
},
"SecurityGroupId":{
"type":"string",
"max":20,
"min":11,
"pattern":"^sg-[0-9a-f]{8,17}$"
},
"SecurityGroupIds":{
"type":"list",
"member":{"shape":"SecurityGroupId"}
},
"SecurityPolicyName":{
"type":"string",
"max":100,
"pattern":"TransferSecurityPolicy-.+"
},
"SecurityPolicyNames":{
"type":"list",
"member":{"shape":"SecurityPolicyName"}
},
"SecurityPolicyOption":{
"type":"string",
"max":50
},
"SecurityPolicyOptions":{
"type":"list",
"member":{"shape":"SecurityPolicyOption"}
},
"SendWorkflowStepStateRequest":{
"type":"structure",
"required":[
"WorkflowId",
"ExecutionId",
"Token",
"Status"
],
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
},
"ExecutionId":{
"shape":"ExecutionId",
"documentation":"<p>A unique identifier for the execution of a workflow.</p>"
},
"Token":{
"shape":"CallbackToken",
"documentation":"<p>Used to distinguish between multiple callbacks for multiple Lambda steps within the same execution.</p>"
},
"Status":{
"shape":"CustomStepStatus",
"documentation":"<p>Indicates whether the specified step succeeded or failed.</p>"
}
}
},
"SendWorkflowStepStateResponse":{
"type":"structure",
"members":{
}
},
"ServerId":{
"type":"string",
"max":19,
"min":19,
"pattern":"^s-([0-9a-f]{17})$"
},
"ServiceErrorMessage":{"type":"string"},
"ServiceMetadata":{
"type":"structure",
"required":["UserDetails"],
"members":{
"UserDetails":{
"shape":"UserDetails",
"documentation":"<p>The Server ID (<code>ServerId</code>), Session ID (<code>SessionId</code>) and user (<code>UserName</code>) make up the <code>UserDetails</code>.</p>"
}
},
"documentation":"<p>A container object for the session details that are associated with a workflow.</p>"
},
"ServiceUnavailableException":{
"type":"structure",
"members":{
"Message":{"shape":"ServiceErrorMessage"}
},
"documentation":"<p>The request has failed because the Amazon Web ServicesTransfer Family service is not available.</p>",
"exception":true,
"fault":true,
"synthetic":true
},
"SessionId":{
"type":"string",
"max":32,
"min":3,
"pattern":"^[\\w-]*$"
},
"SetStatOption":{
"type":"string",
"enum":[
"DEFAULT",
"ENABLE_NO_OP"
]
},
"SigningAlg":{
"type":"string",
"enum":[
"SHA256",
"SHA384",
"SHA512",
"SHA1",
"NONE"
]
},
"SourceFileLocation":{
"type":"string",
"max":256,
"pattern":"^\\$\\{(\\w+.)+\\w+\\}$"
},
"SourceIp":{
"type":"string",
"max":32,
"pattern":"^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}$"
},
"SshPublicKey":{
"type":"structure",
"required":[
"DateImported",
"SshPublicKeyBody",
"SshPublicKeyId"
],
"members":{
"DateImported":{
"shape":"DateImported",
"documentation":"<p>Specifies the date that the public key was added to the user account.</p>"
},
"SshPublicKeyBody":{
"shape":"SshPublicKeyBody",
"documentation":"<p>Specifies the content of the SSH public key as specified by the <code>PublicKeyId</code>.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p>"
},
"SshPublicKeyId":{
"shape":"SshPublicKeyId",
"documentation":"<p>Specifies the <code>SshPublicKeyId</code> parameter contains the identifier of the public key.</p>"
}
},
"documentation":"<p>Provides information about the public Secure Shell (SSH) key that is associated with a user account for the specific file transfer protocol-enabled server (as identified by <code>ServerId</code>). The information returned includes the date the key was imported, the public key contents, and the public key ID. A user can store more than one SSH public key associated with their user name on a specific server.</p>"
},
"SshPublicKeyBody":{
"type":"string",
"max":2048
},
"SshPublicKeyCount":{"type":"integer"},
"SshPublicKeyId":{
"type":"string",
"max":21,
"min":21,
"pattern":"^key-[0-9a-f]{17}$"
},
"SshPublicKeys":{
"type":"list",
"member":{"shape":"SshPublicKey"},
"max":5
},
"StartFileTransferRequest":{
"type":"structure",
"required":[
"ConnectorId",
"SendFilePaths"
],
"members":{
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector. </p>"
},
"SendFilePaths":{
"shape":"FilePaths",
"documentation":"<p>An array of strings. Each string represents the absolute path for one outbound file transfer. For example, <code> <i>DOC-EXAMPLE-BUCKET</i>/<i>myfile.txt</i> </code>. </p>"
}
}
},
"StartFileTransferResponse":{
"type":"structure",
"required":["TransferId"],
"members":{
"TransferId":{
"shape":"TransferId",
"documentation":"<p>Returns the unique identifier for this file transfer. </p>"
}
}
},
"StartServerRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that you start.</p>"
}
}
},
"State":{
"type":"string",
"documentation":"<p>Describes the condition of a file transfer protocol-enabled server with respect to its ability to perform file operations. There are six possible states: <code>OFFLINE</code>, <code>ONLINE</code>, <code>STARTING</code>, <code>STOPPING</code>, <code>START_FAILED</code>, and <code>STOP_FAILED</code>.</p> <p> <code>OFFLINE</code> indicates that the server exists, but that it is not available for file operations. <code>ONLINE</code> indicates that the server is available to perform file operations. <code>STARTING</code> indicates that the server's was instantiated, but the server is not yet available to perform file operations. Under normal conditions, it can take a couple of minutes for the server to be completely operational. Both <code>START_FAILED</code> and <code>STOP_FAILED</code> are error conditions.</p>",
"enum":[
"OFFLINE",
"ONLINE",
"STARTING",
"STOPPING",
"START_FAILED",
"STOP_FAILED"
]
},
"StatusCode":{"type":"integer"},
"StepResultOutputsJson":{
"type":"string",
"max":65536
},
"StopServerRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that you stopped.</p>"
}
}
},
"SubnetId":{"type":"string"},
"SubnetIds":{
"type":"list",
"member":{"shape":"SubnetId"}
},
"Tag":{
"type":"structure",
"required":[
"Key",
"Value"
],
"members":{
"Key":{
"shape":"TagKey",
"documentation":"<p>The name assigned to the tag that you create.</p>"
},
"Value":{
"shape":"TagValue",
"documentation":"<p>Contains one or more values that you assigned to the key name you create.</p>"
}
},
"documentation":"<p>Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called <code>Group</code> and assign the values <code>Research</code> and <code>Accounting</code> to that group.</p>"
},
"TagKey":{
"type":"string",
"max":128
},
"TagKeys":{
"type":"list",
"member":{"shape":"TagKey"},
"max":50,
"min":1
},
"TagResourceRequest":{
"type":"structure",
"required":[
"Arn",
"Tags"
],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>An Amazon Resource Name (ARN) for a specific Amazon Web Services resource, such as a server, user, or role.</p>"
},
"Tags":{
"shape":"Tags",
"documentation":"<p>Key-value pairs assigned to ARNs that you can use to group and search for resources by type. You can attach this metadata to user accounts for any purpose.</p>"
}
}
},
"TagStepDetails":{
"type":"structure",
"members":{
"Name":{
"shape":"WorkflowStepName",
"documentation":"<p>The name of the step, used as an identifier.</p>"
},
"Tags":{
"shape":"S3Tags",
"documentation":"<p>Array that contains from 1 to 10 key/value pairs.</p>"
},
"SourceFileLocation":{
"shape":"SourceFileLocation",
"documentation":"<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>Enter <code>${previous.file}</code> to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>Enter <code>${original.file}</code> to use the originally-uploaded file location as input for this step.</p> </li> </ul>"
}
},
"documentation":"<p>Each step type has its own <code>StepDetails</code> structure.</p> <p>The key/value pairs used to tag a file during the execution of a workflow step.</p>"
},
"TagValue":{
"type":"string",
"max":256
},
"Tags":{
"type":"list",
"member":{"shape":"Tag"},
"max":50,
"min":1
},
"TestIdentityProviderRequest":{
"type":"structure",
"required":[
"ServerId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned identifier for a specific server. That server's user authentication method is tested with a user name and password.</p>"
},
"ServerProtocol":{
"shape":"Protocol",
"documentation":"<p>The type of file transfer protocol to be tested.</p> <p>The available protocols are:</p> <ul> <li> <p>Secure Shell (SSH) File Transfer Protocol (SFTP)</p> </li> <li> <p>File Transfer Protocol Secure (FTPS)</p> </li> <li> <p>File Transfer Protocol (FTP)</p> </li> </ul>"
},
"SourceIp":{
"shape":"SourceIp",
"documentation":"<p>The source IP address of the user account to be tested.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>The name of the user account to be tested.</p>"
},
"UserPassword":{
"shape":"UserPassword",
"documentation":"<p>The password of the user account to be tested.</p>"
}
}
},
"TestIdentityProviderResponse":{
"type":"structure",
"required":[
"StatusCode",
"Url"
],
"members":{
"Response":{
"shape":"Response",
"documentation":"<p>The response that is returned from your API Gateway.</p>"
},
"StatusCode":{
"shape":"StatusCode",
"documentation":"<p>The HTTP status code that is the response from your API Gateway.</p>"
},
"Message":{
"shape":"Message",
"documentation":"<p>A message that indicates whether the test was successful or not.</p> <note> <p>If an empty string is returned, the most likely cause is that the authentication failed due to an incorrect username or password.</p> </note>"
},
"Url":{
"shape":"Url",
"documentation":"<p>The endpoint of the service used to authenticate a user.</p>"
}
}
},
"ThrottlingException":{
"type":"structure",
"members":{
"RetryAfterSeconds":{"shape":"RetryAfterSeconds"}
},
"documentation":"<p>The request was denied due to request throttling.</p>",
"exception":true
},
"TlsSessionResumptionMode":{
"type":"string",
"enum":[
"DISABLED",
"ENABLED",
"ENFORCED"
]
},
"TransferId":{
"type":"string",
"max":512,
"min":1,
"pattern":"^[0-9a-zA-Z./-]+$"
},
"UntagResourceRequest":{
"type":"structure",
"required":[
"Arn",
"TagKeys"
],
"members":{
"Arn":{
"shape":"Arn",
"documentation":"<p>The value of the resource that will have the tag removed. An Amazon Resource Name (ARN) is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.</p>"
},
"TagKeys":{
"shape":"TagKeys",
"documentation":"<p>TagKeys are key-value pairs assigned to ARNs that can be used to group and search for resources by type. This metadata can be attached to resources for any purpose.</p>"
}
}
},
"UpdateAccessRequest":{
"type":"structure",
"required":[
"ServerId",
"ExternalId"
],
"members":{
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"HomeDirectoryMappings":{
"shape":"HomeDirectoryMappings",
"documentation":"<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to <code>/</code> and set <code>Target</code> to the <code>HomeDirectory</code> parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>"
},
"Policy":{
"shape":"Policy",
"documentation":"<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy.html\">Example session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web ServicesSecurity Token Service API Reference</i>.</p> </note>"
},
"PosixProfile":{"shape":"PosixProfile"},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>"
}
}
},
"UpdateAccessResponse":{
"type":"structure",
"required":[
"ServerId",
"ExternalId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that the user is attached to.</p>"
},
"ExternalId":{
"shape":"ExternalId",
"documentation":"<p>The external identifier of the group whose users have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web ServicesTransfer Family.</p>"
}
}
},
"UpdateAgreementRequest":{
"type":"structure",
"required":[
"AgreementId",
"ServerId"
],
"members":{
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance. This is the specific server that the agreement uses.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>To replace the existing description, provide a short description for the agreement. </p>"
},
"Status":{
"shape":"AgreementStatusType",
"documentation":"<p>You can update the status for the agreement, either activating an inactive agreement or the reverse.</p>"
},
"LocalProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the AS2 local profile.</p> <p>To change the local profile identifier, provide a new value here.</p>"
},
"PartnerProfileId":{
"shape":"ProfileId",
"documentation":"<p>A unique identifier for the partner profile. To change the partner profile identifier, provide a new value here.</p>"
},
"BaseDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>To change the landing directory (folder) for files that are transferred, provide the bucket folder that you want to use; for example, <code>/<i>DOC-EXAMPLE-BUCKET</i>/<i>home</i>/<i>mydirectory</i> </code>.</p>"
},
"AccessRole":{
"shape":"Role",
"documentation":"<p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the files parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p>"
}
}
},
"UpdateAgreementResponse":{
"type":"structure",
"required":["AgreementId"],
"members":{
"AgreementId":{
"shape":"AgreementId",
"documentation":"<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
}
}
},
"UpdateCertificateRequest":{
"type":"structure",
"required":["CertificateId"],
"members":{
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>The identifier of the certificate object that you are updating.</p>"
},
"ActiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes active.</p>"
},
"InactiveDate":{
"shape":"CertDate",
"documentation":"<p>An optional date that specifies when the certificate becomes inactive.</p>"
},
"Description":{
"shape":"Description",
"documentation":"<p>A short description to help identify the certificate.</p>"
}
}
},
"UpdateCertificateResponse":{
"type":"structure",
"required":["CertificateId"],
"members":{
"CertificateId":{
"shape":"CertificateId",
"documentation":"<p>Returns the identifier of the certificate object that you are updating.</p>"
}
}
},
"UpdateConnectorRequest":{
"type":"structure",
"required":["ConnectorId"],
"members":{
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>The unique identifier for the connector.</p>"
},
"Url":{
"shape":"Url",
"documentation":"<p>The URL of the partner's AS2 endpoint.</p>"
},
"As2Config":{
"shape":"As2ConnectorConfig",
"documentation":"<p>A structure that contains the parameters for a connector object.</p>"
},
"AccessRole":{
"shape":"Role",
"documentation":"<p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the files parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p>"
},
"LoggingRole":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. When set, you can view connector activity in your CloudWatch logs.</p>"
}
}
},
"UpdateConnectorResponse":{
"type":"structure",
"required":["ConnectorId"],
"members":{
"ConnectorId":{
"shape":"ConnectorId",
"documentation":"<p>Returns the identifier of the connector object that you are updating.</p>"
}
}
},
"UpdateHostKeyRequest":{
"type":"structure",
"required":[
"ServerId",
"HostKeyId",
"Description"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The identifier of the server that contains the host key that you are updating.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>The identifier of the host key that you are updating.</p>"
},
"Description":{
"shape":"HostKeyDescription",
"documentation":"<p>An updated description for the host key.</p>"
}
}
},
"UpdateHostKeyResponse":{
"type":"structure",
"required":[
"ServerId",
"HostKeyId"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>Returns the server identifier for the server that contains the updated host key.</p>"
},
"HostKeyId":{
"shape":"HostKeyId",
"documentation":"<p>Returns the host key identifier for the updated host key.</p>"
}
}
},
"UpdateProfileRequest":{
"type":"structure",
"required":["ProfileId"],
"members":{
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>The identifier of the profile object that you are updating.</p>"
},
"CertificateIds":{
"shape":"CertificateIds",
"documentation":"<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
}
}
},
"UpdateProfileResponse":{
"type":"structure",
"required":["ProfileId"],
"members":{
"ProfileId":{
"shape":"ProfileId",
"documentation":"<p>Returns the identifier for the profile that's being updated.</p>"
}
}
},
"UpdateServerRequest":{
"type":"structure",
"required":["ServerId"],
"members":{
"Certificate":{
"shape":"Certificate",
"documentation":"<p>The Amazon Resource Name (ARN) of the Amazon Web ServicesCertificate Manager (ACM) certificate. Required when <code>Protocols</code> is set to <code>FTPS</code>.</p> <p>To request a new public certificate, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html\">Request a public certificate</a> in the <i> Amazon Web ServicesCertificate Manager User Guide</i>.</p> <p>To import an existing certificate into ACM, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html\">Importing certificates into ACM</a> in the <i> Amazon Web ServicesCertificate Manager User Guide</i>.</p> <p>To request a private certificate to use FTPS through private IP addresses, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-private.html\">Request a private certificate</a> in the <i> Amazon Web ServicesCertificate Manager User Guide</i>.</p> <p>Certificates with the following cryptographic algorithms and key sizes are supported:</p> <ul> <li> <p>2048-bit RSA (RSA_2048)</p> </li> <li> <p>4096-bit RSA (RSA_4096)</p> </li> <li> <p>Elliptic Prime Curve 256 bit (EC_prime256v1)</p> </li> <li> <p>Elliptic Prime Curve 384 bit (EC_secp384r1)</p> </li> <li> <p>Elliptic Prime Curve 521 bit (EC_secp521r1)</p> </li> </ul> <note> <p>The certificate must be a valid SSL/TLS X.509 version 3 certificate with FQDN or IP address specified and information about the issuer.</p> </note>"
},
"ProtocolDetails":{
"shape":"ProtocolDetails",
"documentation":"<p>The protocol settings that are configured for your server.</p> <ul> <li> <p> To indicate passive mode (for FTP and FTPS protocols), use the <code>PassiveIp</code> parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer. </p> </li> <li> <p>To ignore the error that is generated when the client attempts to use the <code>SETSTAT</code> command on a file that you are uploading to an Amazon S3 bucket, use the <code>SetStatOption</code> parameter. To have the Transfer Family server ignore the <code>SETSTAT</code> command and upload files without needing to make any changes to your SFTP client, set the value to <code>ENABLE_NO_OP</code>. If you set the <code>SetStatOption</code> parameter to <code>ENABLE_NO_OP</code>, Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a <code>SETSTAT</code> call.</p> </li> <li> <p>To determine whether your Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the <code>TlsSessionResumptionMode</code> parameter.</p> </li> <li> <p> <code>As2Transports</code> indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p> </li> </ul>"
},
"EndpointDetails":{
"shape":"EndpointDetails",
"documentation":"<p>The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make your endpoint accessible only to resources within your VPC, or you can attach Elastic IP addresses and make your endpoint accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.</p>"
},
"EndpointType":{
"shape":"EndpointType",
"documentation":"<p>The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.</p> <note> <p> After May 19, 2021, you won't be able to create a server using <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Servicesaccount if your account hasn't already done so before May 19, 2021. If you have already created servers with <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Servicesaccount on or before May 19, 2021, you will not be affected. After this date, use <code>EndpointType</code>=<code>VPC</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> <p>It is recommended that you use <code>VPC</code> as the <code>EndpointType</code>. With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with <code>EndpointType</code> set to <code>VPC_ENDPOINT</code>.</p> </note>"
},
"HostKey":{
"shape":"HostKey",
"documentation":"<p>The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.</p> <p>Use the following command to generate an RSA 2048 bit key with no passphrase:</p> <p> <code>ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Use a minimum value of 2048 for the <code>-b</code> option. You can create a stronger key by using 3072 or 4096.</p> <p>Use the following command to generate an ECDSA 256 bit key with no passphrase:</p> <p> <code>ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Valid values for the <code>-b</code> option for ECDSA are 256, 384, and 521.</p> <p>Use the following command to generate an ED25519 key with no passphrase:</p> <p> <code>ssh-keygen -t ed25519 -N \"\" -f my-new-server-key</code>.</p> <p>For all of these commands, you can replace <i>my-new-server-key</i> with a string of your choice.</p> <important> <p>If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.</p> </important> <p>For more information, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/edit-server-config.html#configuring-servers-change-host-key\">Update host keys for your SFTP-enabled server</a> in the <i>Transfer Family User Guide</i>.</p>"
},
"IdentityProviderDetails":{
"shape":"IdentityProviderDetails",
"documentation":"<p>An array containing all of the information required to call a customer's authentication API method.</p>"
},
"LoggingRole":{
"shape":"NullableRole",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>"
},
"PostAuthenticationLoginBanner":{
"shape":"PostAuthenticationLoginBanner",
"documentation":"<p>Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.</p> <note> <p>The SFTP protocol does not support post-authentication display banners.</p> </note>"
},
"PreAuthenticationLoginBanner":{
"shape":"PreAuthenticationLoginBanner",
"documentation":"<p>Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:</p> <p> <code>This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.</code> </p>"
},
"Protocols":{
"shape":"Protocols",
"documentation":"<p>Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:</p> <ul> <li> <p> <code>SFTP</code> (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH</p> </li> <li> <p> <code>FTPS</code> (File Transfer Protocol Secure): File transfer with TLS encryption</p> </li> <li> <p> <code>FTP</code> (File Transfer Protocol): Unencrypted file transfer</p> </li> <li> <p> <code>AS2</code> (Applicability Statement 2): used for transporting structured business-to-business data</p> </li> </ul> <note> <ul> <li> <p>If you select <code>FTPS</code>, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.</p> </li> <li> <p>If <code>Protocol</code> includes either <code>FTP</code> or <code>FTPS</code>, then the <code>EndpointType</code> must be <code>VPC</code> and the <code>IdentityProviderType</code> must be <code>AWS_DIRECTORY_SERVICE</code> or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>FTP</code>, then <code>AddressAllocationIds</code> cannot be associated.</p> </li> <li> <p>If <code>Protocol</code> is set only to <code>SFTP</code>, the <code>EndpointType</code> can be set to <code>PUBLIC</code> and the <code>IdentityProviderType</code> can be set to <code>SERVICE_MANAGED</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>AS2</code>, then the <code>EndpointType</code> must be <code>VPC</code>, and domain must be Amazon S3.</p> </li> </ul> </note>"
},
"SecurityPolicyName":{
"shape":"SecurityPolicyName",
"documentation":"<p>Specifies the name of the security policy that is attached to the server.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance that the user account is assigned to.</p>"
},
"WorkflowDetails":{
"shape":"WorkflowDetails",
"documentation":"<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.</p> <p>To remove an associated workflow from a server, you can provide an empty <code>OnUpload</code> object, as in the following example.</p> <p> <code>aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{\"OnUpload\":[]}'</code> </p>"
}
}
},
"UpdateServerResponse":{
"type":"structure",
"required":["ServerId"],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server that the user account is assigned to.</p>"
}
}
},
"UpdateUserRequest":{
"type":"structure",
"required":[
"ServerId",
"UserName"
],
"members":{
"HomeDirectory":{
"shape":"HomeDirectory",
"documentation":"<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p>"
},
"HomeDirectoryType":{
"shape":"HomeDirectoryType",
"documentation":"<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p>"
},
"HomeDirectoryMappings":{
"shape":"HomeDirectoryMappings",
"documentation":"<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to '/' and set <code>Target</code> to the HomeDirectory parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>"
},
"Policy":{
"shape":"Policy",
"documentation":"<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy\">Creating a session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web Services Security Token Service API Reference</i>.</p> </note>"
},
"PosixProfile":{
"shape":"PosixProfile",
"documentation":"<p>Specifies the full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon Elastic File Systems (Amazon EFS). The POSIX permissions that are set on files and directories in your file system determines the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>"
},
"Role":{
"shape":"Role",
"documentation":"<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance that the user account is assigned to.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>A unique string that identifies a user and is associated with a server as specified by the <code>ServerId</code>. This user name must be a minimum of 3 and a maximum of 100 characters long. The following are valid characters: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The user name can't start with a hyphen, period, or at sign.</p>"
}
}
},
"UpdateUserResponse":{
"type":"structure",
"required":[
"ServerId",
"UserName"
],
"members":{
"ServerId":{
"shape":"ServerId",
"documentation":"<p>A system-assigned unique identifier for a server instance that the user account is assigned to.</p>"
},
"UserName":{
"shape":"UserName",
"documentation":"<p>The unique identifier for a user that is assigned to a server instance that was specified in the request.</p>"
}
},
"documentation":"<p> <code>UpdateUserResponse</code> returns the user name and identifier for the request to update a user's properties.</p>"
},
"Url":{
"type":"string",
"max":255
},
"UserCount":{"type":"integer"},
"UserDetails":{
"type":"structure",
"required":[
"UserName",
"ServerId"
],
"members":{
"UserName":{
"shape":"UserName",
"documentation":"<p>A unique string that identifies a user account associated with a server.</p>"
},
"ServerId":{
"shape":"ServerId",
"documentation":"<p>The system-assigned unique identifier for a Transfer server instance. </p>"
},
"SessionId":{
"shape":"SessionId",
"documentation":"<p>The system-assigned unique identifier for a session that corresponds to the workflow.</p>"
}
},
"documentation":"<p>Specifies the user name, server ID, and session ID for a workflow.</p>"
},
"UserName":{
"type":"string",
"max":100,
"min":3,
"pattern":"^[\\w][\\w@.-]{2,99}$"
},
"UserPassword":{
"type":"string",
"max":1024,
"sensitive":true
},
"VpcEndpointId":{
"type":"string",
"max":22,
"min":22,
"pattern":"^vpce-[0-9a-f]{17}$"
},
"VpcId":{"type":"string"},
"WorkflowDescription":{
"type":"string",
"max":256,
"pattern":"^[\\w- ]*$"
},
"WorkflowDetail":{
"type":"structure",
"required":[
"WorkflowId",
"ExecutionRole"
],
"members":{
"WorkflowId":{
"shape":"WorkflowId",
"documentation":"<p>A unique identifier for the workflow.</p>"
},
"ExecutionRole":{
"shape":"Role",
"documentation":"<p>Includes the necessary permissions for S3, EFS, and Lambda operations that Transfer can assume, so that all workflow steps can operate on the required resources</p>"
}
},
"documentation":"<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.</p>"
},
"WorkflowDetails":{
"type":"structure",
"members":{
"OnUpload":{
"shape":"OnUploadWorkflowDetails",
"documentation":"<p>A trigger that starts a workflow: the workflow begins to execute after a file is uploaded.</p> <p>To remove an associated workflow from a server, you can provide an empty <code>OnUpload</code> object, as in the following example.</p> <p> <code>aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{\"OnUpload\":[]}'</code> </p>"
},
"OnPartialUpload":{
"shape":"OnPartialUploadWorkflowDetails",
"documentation":"<p>A trigger that starts a workflow if a file is only partially uploaded. You can attach a workflow to a server that executes whenever there is a partial upload.</p> <p>A <i>partial upload</i> occurs when a file is open when the session disconnects.</p>"
}
},
"documentation":"<p>Container for the <code>WorkflowDetail</code> data type. It is used by actions that trigger a workflow to begin execution.</p>"
},
"WorkflowId":{
"type":"string",
"max":19,
"min":19,
"pattern":"^w-([a-z0-9]{17})$"
},
"WorkflowStep":{
"type":"structure",
"members":{
"Type":{
"shape":"WorkflowStepType",
"documentation":"<p> Currently, the following step types are supported. </p> <ul> <li> <p> <i>COPY</i>: Copy the file to another location.</p> </li> <li> <p> <i>CUSTOM</i>: Perform a custom step with an Lambda function target.</p> </li> <li> <p> <i>DELETE</i>: Delete the file.</p> </li> <li> <p> <i>TAG</i>: Add a tag to the file.</p> </li> </ul>"
},
"CopyStepDetails":{
"shape":"CopyStepDetails",
"documentation":"<p>Details for a step that performs a file copy.</p> <p> Consists of the following values: </p> <ul> <li> <p>A description</p> </li> <li> <p>An S3 location for the destination of the file copy.</p> </li> <li> <p>A flag that indicates whether or not to overwrite an existing file of the same name. The default is <code>FALSE</code>.</p> </li> </ul>"
},
"CustomStepDetails":{
"shape":"CustomStepDetails",
"documentation":"<p>Details for a step that invokes a lambda function.</p> <p> Consists of the lambda function name, target, and timeout (in seconds). </p>"
},
"DeleteStepDetails":{
"shape":"DeleteStepDetails",
"documentation":"<p>Details for a step that deletes the file.</p>"
},
"TagStepDetails":{
"shape":"TagStepDetails",
"documentation":"<p>Details for a step that creates one or more tags.</p> <p>You specify one or more tags: each tag contains a key/value pair.</p>"
}
},
"documentation":"<p>The basic building block of a workflow.</p>"
},
"WorkflowStepName":{
"type":"string",
"max":30,
"pattern":"^[\\w-]*$"
},
"WorkflowStepType":{
"type":"string",
"enum":[
"COPY",
"CUSTOM",
"TAG",
"DELETE"
]
},
"WorkflowSteps":{
"type":"list",
"member":{"shape":"WorkflowStep"},
"max":8
}
},
"documentation":"<p>Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3) or Amazon EFS. Additionally, you can use Applicability Statement 2 (AS2) to transfer files into and out of Amazon S3. Amazon Web Services helps you seamlessly migrate your file transfer workflows to Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services for processing, analytics, machine learning, and archiving. Getting started with Transfer Family is easy since there is no infrastructure to buy and set up.</p>"
}