New upstream version 1.29.27+repack

Noah Meyerhans 2022-12-12 08:14:19 -08:00
parent 33e7cf81e5
commit 1beaa08a7f
1301 changed files with 632231 additions and 17289 deletions

MANIFEST.in

@ -5,6 +5,7 @@ include requirements-dev.txt
include botocore/cacert.pem
include botocore/vendored/requests/cacert.pem
recursive-include botocore/data *.json
recursive-include botocore/data *.json.gz
graft docs
prune docs/build
graft tests

NOTICE

@ -1,9 +1,9 @@
Botocore
Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Copyright 2012-2022 Amazon.com, Inc. or its affiliates. All Rights Reserved.
----
Botocore includes a vendorized copy of the requests python library to ease installation.
Botocore includes vendorized parts of the requests python library for backwards compatibility.
Requests License
================
@ -22,8 +22,7 @@ Copyright 2013 Kenneth Reitz
See the License for the specific language governing permissions and
limitations under the License.
The requests library also includes some vendorized python libraries to ease installation.
Botocore includes vendorized parts of the urllib3 library for backwards compatibility.
Urllib3 License
===============
@ -49,38 +48,13 @@ FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TOR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
Chardet License
===============
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
02110-1301 USA
Bundle of CA Root Certificates
==============================
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
***** BEGIN LICENSE BLOCK *****
This Source Code Form is subject to the terms of the
Mozilla Public License, v. 2.0. If a copy of the MPL
was not distributed with this file, You can obtain
one at http://mozilla.org/MPL/2.0/.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
02110-1301
***** END LICENSE BLOCK *****

PKG-INFO

@ -1,6 +1,6 @@
Metadata-Version: 2.1
Name: botocore
Version: 1.26.8
Version: 1.29.27
Summary: Low-level, data-driven core of boto 3.
Home-page: https://github.com/boto/botocore
Author: Amazon Web Services
@ -13,12 +13,12 @@ Classifier: Natural Language :: English
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Requires-Python: >= 3.6
Classifier: Programming Language :: Python :: 3.11
Requires-Python: >= 3.7
Provides-Extra: crt
License-File: LICENSE.txt
License-File: NOTICE
@ -43,7 +43,7 @@ on 2021-07-15. To avoid disruption, customers using Botocore on Python 2.7 may
need to upgrade their version of Python or pin the version of Botocore. For
more information, see this `blog post <https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-python-2-7-in-aws-sdk-for-python-and-aws-cli-v1/>`__.
On 2022-05-30, we will be dropping support for Python 3.6. This follows the
On 2022-05-30, support was dropped for Python 3.6. This follows the
Python Software Foundation `end of support <https://www.python.org/dev/peps/pep-0494/#lifespan>`__
for the runtime which occurred on 2021-12-23.
For more information, see this `blog post <https://aws.amazon.com/blogs/developer/python-support-policy-updates-for-aws-sdks-and-tools/>`__.
@ -132,10 +132,10 @@ Maintenance and Support for SDK Major Versions
Botocore was made generally available on 06/22/2015 and is currently in the full support phase of the availability life cycle.
For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide:
For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Reference Guide:
* `AWS SDKs and Tools Maintenance Policy <https://docs.aws.amazon.com/credref/latest/refdocs/maint-policy.html>`__
* `AWS SDKs and Tools Version Support Matrix <https://docs.aws.amazon.com/credref/latest/refdocs/version-support-matrix.html>`__
* `AWS SDKs and Tools Maintenance Policy <https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html>`__
* `AWS SDKs and Tools Version Support Matrix <https://docs.aws.amazon.com/sdkref/latest/guide/version-support-matrix.html>`__
More Resources

README.rst

@ -18,7 +18,7 @@ on 2021-07-15. To avoid disruption, customers using Botocore on Python 2.7 may
need to upgrade their version of Python or pin the version of Botocore. For
more information, see this `blog post <https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-python-2-7-in-aws-sdk-for-python-and-aws-cli-v1/>`__.
On 2022-05-30, we will be dropping support for Python 3.6. This follows the
On 2022-05-30, support was dropped for Python 3.6. This follows the
Python Software Foundation `end of support <https://www.python.org/dev/peps/pep-0494/#lifespan>`__
for the runtime which occurred on 2021-12-23.
For more information, see this `blog post <https://aws.amazon.com/blogs/developer/python-support-policy-updates-for-aws-sdks-and-tools/>`__.
@ -107,10 +107,10 @@ Maintenance and Support for SDK Major Versions
Botocore was made generally available on 06/22/2015 and is currently in the full support phase of the availability life cycle.
For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide:
For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Reference Guide:
* `AWS SDKs and Tools Maintenance Policy <https://docs.aws.amazon.com/credref/latest/refdocs/maint-policy.html>`__
* `AWS SDKs and Tools Version Support Matrix <https://docs.aws.amazon.com/credref/latest/refdocs/version-support-matrix.html>`__
* `AWS SDKs and Tools Maintenance Policy <https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html>`__
* `AWS SDKs and Tools Version Support Matrix <https://docs.aws.amazon.com/sdkref/latest/guide/version-support-matrix.html>`__
More Resources

botocore/__init__.py

@ -16,7 +16,7 @@ import logging
import os
import re
__version__ = '1.26.8'
__version__ = '1.29.27'
class NullHandler(logging.Handler):
@ -28,6 +28,7 @@ class NullHandler(logging.Handler):
log = logging.getLogger('botocore')
log.addHandler(NullHandler())
_INITIALIZERS = []
_first_cap_regex = re.compile('(.)([A-Z][a-z]+)')
_end_cap_regex = re.compile('([a-z0-9])([A-Z])')
@ -97,3 +98,42 @@ def xform_name(name, sep='_', _xform_cache=_xform_cache):
transformed = _end_cap_regex.sub(r'\1' + sep + r'\2', s1).lower()
_xform_cache[key] = transformed
return _xform_cache[key]
def register_initializer(callback):
"""Register an initializer function for session creation.
This initializer function will be invoked whenever a new
`botocore.session.Session` is instantiated.
:type callback: callable
:param callback: A callable that accepts a single argument
of type `botocore.session.Session`.
"""
_INITIALIZERS.append(callback)
def unregister_initializer(callback):
"""Unregister an initializer function.
:type callback: callable
:param callback: A callable that was previously registered
with `botocore.register_initializer`.
:raises ValueError: If a callback is provided that is not currently
registered as an initializer.
"""
_INITIALIZERS.remove(callback)
def invoke_initializers(session):
"""Invoke all initializers for a session.
:type session: botocore.session.Session
:param session: The session to initialize.
"""
for initializer in _INITIALIZERS:
initializer(session)
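
The new initializer hooks can be exercised directly from application code. A minimal sketch (the `tag_session` callback and the 'my-app/1.0' marker are made-up names for illustration):

import botocore
import botocore.session

def tag_session(session):
    # Hypothetical initializer: stamp every newly created session's user agent.
    session.user_agent_extra = 'my-app/1.0'

botocore.register_initializer(tag_session)
session = botocore.session.Session()  # tag_session(session) runs during __init__
botocore.unregister_initializer(tag_session)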

botocore/args.py

@ -23,10 +23,12 @@ import socket
import botocore.exceptions
import botocore.parsers
import botocore.serialize
import botocore.utils
from botocore.config import Config
from botocore.endpoint import EndpointCreator
from botocore.regions import EndpointResolverBuiltins as EPRBuiltins
from botocore.regions import EndpointRulesetResolver
from botocore.signers import RequestSigner
from botocore.utils import ensure_boolean, is_s3_accelerate_url
logger = logging.getLogger(__name__)
@ -83,6 +85,9 @@ class ClientArgsCreator:
scoped_config,
client_config,
endpoint_bridge,
auth_token=None,
endpoints_ruleset_data=None,
partition_data=None,
):
final_args = self.compute_client_args(
service_model,
@ -114,6 +119,7 @@ class ClientArgsCreator:
endpoint_config['signature_version'],
credentials,
event_emitter,
auth_token,
)
config_kwargs['s3'] = s3_config
@ -138,6 +144,21 @@ class ClientArgsCreator:
protocol, parameter_validation
)
response_parser = botocore.parsers.create_parser(protocol)
ruleset_resolver = self._build_endpoint_resolver(
endpoints_ruleset_data,
partition_data,
client_config,
service_model,
endpoint_region_name,
region_name,
endpoint_url,
endpoint,
is_secure,
endpoint_bridge,
event_emitter,
)
return {
'serializer': serializer,
'endpoint': endpoint,
@ -149,6 +170,7 @@ class ClientArgsCreator:
'client_config': new_config,
'partition': partition,
'exceptions_factory': self._exceptions_factory,
'endpoint_ruleset_resolver': ruleset_resolver,
}
def compute_client_args(
@ -169,7 +191,7 @@ class ClientArgsCreator:
elif scoped_config:
raw_value = scoped_config.get('parameter_validation')
if raw_value is not None:
parameter_validation = botocore.utils.ensure_boolean(raw_value)
parameter_validation = ensure_boolean(raw_value)
# Override the user agent if specified in the client config.
user_agent = self._user_agent
@ -211,12 +233,13 @@ class ClientArgsCreator:
retries=client_config.retries,
client_cert=client_config.client_cert,
inject_host_prefix=client_config.inject_host_prefix,
tcp_keepalive=client_config.tcp_keepalive,
)
self._compute_retry_config(config_kwargs)
self._compute_connect_timeout(config_kwargs)
s3_config = self.compute_s3_config(client_config)
is_s3_service = service_name in ['s3', 's3-control']
is_s3_service = self._is_s3_service(service_name)
if is_s3_service and 'dualstack' in endpoint_variant_tags:
if s3_config is None:
@ -231,7 +254,9 @@ class ClientArgsCreator:
'protocol': protocol,
'config_kwargs': config_kwargs,
's3_config': s3_config,
'socket_options': self._compute_socket_options(scoped_config),
'socket_options': self._compute_socket_options(
scoped_config, client_config
),
}
def compute_s3_config(self, client_config):
@ -253,6 +278,16 @@ class ClientArgsCreator:
return s3_configuration
def _is_s3_service(self, service_name):
"""Whether the service is S3 or S3 Control.
Note that throughout this class, service_name refers to the endpoint
prefix, not the folder name of the service in botocore/data. For
S3 Control, the folder name is 's3control' but the endpoint prefix is
's3-control'.
"""
return service_name in ['s3', 's3-control']
def _compute_endpoint_config(
self,
service_name,
@ -342,8 +377,10 @@ class ClientArgsCreator:
def _should_set_global_sts_endpoint(
self, region_name, endpoint_url, endpoint_config
):
endpoint_variant_tags = endpoint_config['metadata'].get('tags')
if endpoint_url or endpoint_variant_tags:
has_variant_tags = endpoint_config and endpoint_config.get(
'metadata', {}
).get('tags')
if endpoint_url or has_variant_tags:
return False
return (
self._get_sts_regional_endpoints_config() == 'legacy'
@ -382,16 +419,17 @@ class ClientArgsCreator:
service_name, region_name, endpoint_url, is_secure
)
def _compute_socket_options(self, scoped_config):
def _compute_socket_options(self, scoped_config, client_config=None):
# This disables Nagle's algorithm and is the default socket options
# in urllib3.
socket_options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
if scoped_config:
# Enables TCP Keepalive if specified in shared config file.
if self._ensure_boolean(scoped_config.get('tcp_keepalive', False)):
socket_options.append(
(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
)
client_keepalive = client_config and client_config.tcp_keepalive
scoped_keepalive = scoped_config and self._ensure_boolean(
scoped_config.get("tcp_keepalive", False)
)
# Enables TCP Keepalive if specified in client config object or shared config file.
if client_keepalive or scoped_keepalive:
socket_options.append((socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1))
return socket_options
def _compute_retry_config(self, config_kwargs):
@ -462,3 +500,149 @@ class ClientArgsCreator:
return val
else:
return val.lower() == 'true'
def _build_endpoint_resolver(
self,
endpoints_ruleset_data,
partition_data,
client_config,
service_model,
endpoint_region_name,
region_name,
endpoint_url,
endpoint,
is_secure,
endpoint_bridge,
event_emitter,
):
if endpoints_ruleset_data is None:
return None
# The legacy EndpointResolver is global to the session, but
# EndpointRulesetResolver is service-specific. Builtins for
# EndpointRulesetResolver must not be derived from the legacy
# endpoint resolver's output, including final_args, s3_config,
# etc.
s3_config_raw = self.compute_s3_config(client_config) or {}
service_name_raw = service_model.endpoint_prefix
# Maintain complex logic for s3 and sts endpoints for backwards
# compatibility.
if service_name_raw in ['s3', 'sts'] or region_name is None:
eprv2_region_name = endpoint_region_name
else:
eprv2_region_name = region_name
resolver_builtins = self.compute_endpoint_resolver_builtin_defaults(
region_name=eprv2_region_name,
service_name=service_name_raw,
s3_config=s3_config_raw,
endpoint_bridge=endpoint_bridge,
client_endpoint_url=endpoint_url,
legacy_endpoint_url=endpoint.host,
)
# botocore does not support client context parameters generically
# for every service. Instead, the s3 config section entries are
# available as client context parameters. In the future, endpoint
# rulesets of services other than s3/s3control may require client
# context parameters.
client_context = (
s3_config_raw if self._is_s3_service(service_name_raw) else {}
)
sig_version = (
client_config.signature_version
if client_config is not None
else None
)
return EndpointRulesetResolver(
endpoint_ruleset_data=endpoints_ruleset_data,
partition_data=partition_data,
service_model=service_model,
builtins=resolver_builtins,
client_context=client_context,
event_emitter=event_emitter,
use_ssl=is_secure,
requested_auth_scheme=sig_version,
)
def compute_endpoint_resolver_builtin_defaults(
self,
region_name,
service_name,
s3_config,
endpoint_bridge,
client_endpoint_url,
legacy_endpoint_url,
):
# EndpointRulesetResolver rulesets may accept an "SDK::Endpoint" as
# input. If the endpoint_url argument of create_client() is set, it
# always takes priority.
if client_endpoint_url:
given_endpoint = client_endpoint_url
# If an endpoints.json data file other than the one bundled within
# the botocore/data directory is used, the output of legacy
# endpoint resolution is provided to EndpointRulesetResolver.
elif not endpoint_bridge.resolver_uses_builtin_data():
given_endpoint = legacy_endpoint_url
else:
given_endpoint = None
# The endpoint rulesets differ from legacy botocore behavior in whether
# forcing path style addressing in incompatible situations raises an
# exception or silently ignores the config setting. The
# AWS_S3_FORCE_PATH_STYLE parameter is adjusted both here and for each
# operation so that the ruleset behavior is backwards compatible.
if s3_config.get('use_accelerate_endpoint', False):
force_path_style = False
elif client_endpoint_url is not None and not is_s3_accelerate_url(
client_endpoint_url
):
force_path_style = s3_config.get('addressing_style') != 'virtual'
else:
force_path_style = s3_config.get('addressing_style') == 'path'
return {
EPRBuiltins.AWS_REGION: region_name,
EPRBuiltins.AWS_USE_FIPS: (
# SDK_ENDPOINT cannot be combined with AWS_USE_FIPS
given_endpoint is None
# use legacy resolver's _resolve_endpoint_variant_config_var()
# or default to False if it returns None
and endpoint_bridge._resolve_endpoint_variant_config_var(
'use_fips_endpoint'
)
or False
),
EPRBuiltins.AWS_USE_DUALSTACK: (
# SDK_ENDPOINT cannot be combined with AWS_USE_DUALSTACK
given_endpoint is None
# use legacy resolver's _resolve_use_dualstack_endpoint()
# or default to False if it returns None
and endpoint_bridge._resolve_use_dualstack_endpoint(
service_name
)
or False
),
EPRBuiltins.AWS_STS_USE_GLOBAL_ENDPOINT: (
self._should_set_global_sts_endpoint(
region_name=region_name,
endpoint_url=None,
endpoint_config=None,
)
),
EPRBuiltins.AWS_S3_USE_GLOBAL_ENDPOINT: (
self._should_force_s3_global(region_name, s3_config)
),
EPRBuiltins.AWS_S3_ACCELERATE: s3_config.get(
'use_accelerate_endpoint', False
),
EPRBuiltins.AWS_S3_FORCE_PATH_STYLE: force_path_style,
EPRBuiltins.AWS_S3_USE_ARN_REGION: s3_config.get(
'use_arn_region', True
),
EPRBuiltins.AWS_S3CONTROL_USE_ARN_REGION: s3_config.get(
'use_arn_region', False
),
EPRBuiltins.AWS_S3_DISABLE_MRAP: s3_config.get(
's3_disable_multiregion_access_points', False
),
EPRBuiltins.SDK_ENDPOINT: given_endpoint,
}

botocore/auth.py

@ -35,7 +35,7 @@ from botocore.compat import (
urlsplit,
urlunsplit,
)
from botocore.exceptions import NoCredentialsError
from botocore.exceptions import NoAuthTokenError, NoCredentialsError
from botocore.utils import (
is_valid_ipv6_endpoint_url,
normalize_url_path,
@ -101,11 +101,22 @@ def _get_body_as_dict(request):
class BaseSigner:
REQUIRES_REGION = False
REQUIRES_TOKEN = False
def add_auth(self, request):
raise NotImplementedError("add_auth")
class TokenSigner(BaseSigner):
REQUIRES_TOKEN = True
"""
Signers that expect an authorization token to perform the authorization
"""
def __init__(self, auth_token):
self.auth_token = auth_token
class SigV2Auth(BaseSigner):
"""
Sign a request with Signature V2.
@ -934,6 +945,24 @@ class HmacV1PostAuth(HmacV1Auth):
request.context['s3-presign-post-policy'] = policy
class BearerAuth(TokenSigner):
"""
Performs bearer token authorization by placing the bearer token in the
Authorization header as specified by Section 2.1 of RFC 6750.
https://datatracker.ietf.org/doc/html/rfc6750#section-2.1
"""
def add_auth(self, request):
if self.auth_token is None:
raise NoAuthTokenError()
auth_header = f'Bearer {self.auth_token.token}'
if 'Authorization' in request.headers:
del request.headers['Authorization']
request.headers['Authorization'] = auth_header
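
For illustration, the bearer signer only rewrites the Authorization header. A hedged sketch (the `FakeToken` class is a hypothetical stand-in; real tokens come from a token provider and expose a `token` attribute):

from botocore.auth import BearerAuth
from botocore.awsrequest import AWSRequest

class FakeToken:
    token = 'opaque-bearer-token'  # hypothetical token value

request = AWSRequest(method='GET', url='https://service.example.com/')
BearerAuth(auth_token=FakeToken()).add_auth(request)
print(request.headers['Authorization'])  # Bearer opaque-bearer-token
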
AUTH_TYPE_MAPS = {
'v2': SigV2Auth,
'v3': SigV3Auth,
@ -942,6 +971,7 @@ AUTH_TYPE_MAPS = {
's3-query': HmacV1QueryAuth,
's3-presign-post': HmacV1PostAuth,
's3v4-presign-post': S3SigV4PostAuth,
'bearer': BearerAuth,
}
# Define v4 signers depending on if CRT is present

botocore/awsrequest.py

@ -100,7 +100,7 @@ class AWSConnection:
def _convert_to_bytes(self, mixed_buffer):
# Take a list of mixed str/bytes and convert it
# all into a single bytestring.
# Any six.text_types will be encoded as utf-8.
# Any str will be encoded as utf-8.
bytes_buffer = []
for chunk in mixed_buffer:
if isinstance(chunk, str):
@ -299,7 +299,11 @@ def create_request_object(request_dict):
"""
r = request_dict
request_object = AWSRequest(
method=r['method'], url=r['url'], data=r['body'], headers=r['headers']
method=r['method'],
url=r['url'],
data=r['body'],
headers=r['headers'],
auth_path=r.get('auth_path'),
)
request_object.context = r['context']
return request_object
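
A hedged sketch of the request_dict shape this function consumes (all values are placeholders; real dictionaries are produced by the serializer and prepare_request_dict):

from botocore.awsrequest import create_request_object

request_dict = {
    'method': 'PUT',
    'url': 'https://s3.amazonaws.com/example-bucket/key',
    'body': b'hello',
    'headers': {'Content-Length': '5'},
    'auth_path': '/example-bucket/key',  # now forwarded to AWSRequest
    'context': {},
}
request = create_request_object(request_dict)
print(request.method, request.url, request.auth_path)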

botocore/client.py

@ -16,6 +16,7 @@ from botocore import waiter, xform_name
from botocore.args import ClientArgsCreator
from botocore.auth import AUTH_TYPE_MAPS
from botocore.awsrequest import prepare_request_dict
from botocore.config import Config
from botocore.discovery import (
EndpointDiscoveryHandler,
EndpointDiscoveryManager,
@ -26,6 +27,7 @@ from botocore.exceptions import (
DataNotFoundError,
InvalidEndpointDiscoveryConfigurationError,
OperationNotPageableError,
UnknownServiceError,
UnknownSignatureVersionError,
)
from botocore.history import get_global_history_recorder
@ -40,25 +42,37 @@ from botocore.retries import adaptive, standard
from botocore.utils import (
CachedProperty,
EventbridgeSignerSetter,
S3ArnParamHandler,
S3ControlArnParamHandler,
S3ControlEndpointSetter,
S3EndpointSetter,
S3RegionRedirector,
S3ControlArnParamHandlerv2,
S3RegionRedirectorv2,
ensure_boolean,
get_service_module_name,
)
# Keep these imported. There's pre-existing code that uses:
# "from botocore.client import Config"
# "from botocore.client import UNSIGNED"
# "from botocore.client import ClientError"
# etc.
from botocore.config import Config # noqa
from botocore.exceptions import ClientError # noqa
from botocore.args import ClientArgsCreator # noqa
from botocore.utils import S3ArnParamHandler # noqa
from botocore.utils import S3ControlArnParamHandler # noqa
from botocore.utils import S3ControlEndpointSetter # noqa
from botocore.utils import S3EndpointSetter # noqa
from botocore.utils import S3RegionRedirector # noqa
from botocore import UNSIGNED # noqa
_LEGACY_SIGNATURE_VERSIONS = frozenset(
(
'v2',
'v3',
'v3https',
'v4',
's3',
's3v4',
)
)
logger = logging.getLogger(__name__)
history_recorder = get_global_history_recorder()
@ -103,12 +117,27 @@ class ClientCreator:
scoped_config=None,
api_version=None,
client_config=None,
auth_token=None,
):
responses = self._event_emitter.emit(
'choose-service-name', service_name=service_name
)
service_name = first_non_none_response(responses, default=service_name)
service_model = self._load_service_model(service_name, api_version)
try:
endpoints_ruleset_data = self._load_service_endpoints_ruleset(
service_name, api_version
)
partition_data = self._loader.load_data('partitions')
except UnknownServiceError:
endpoints_ruleset_data = None
partition_data = None
logger.info(
'No endpoints ruleset found for service %s, falling back to '
'legacy endpoint routing.',
service_name,
)
cls = self._create_client_class(service_name, service_model)
region_name, client_config = self._normalize_fips_region(
region_name, client_config
@ -119,6 +148,9 @@ class ClientCreator:
client_config,
service_signing_name=service_model.metadata.get('signingName'),
config_store=self._config_store,
service_signature_version=service_model.metadata.get(
'signatureVersion'
),
)
client_args = self._get_client_args(
service_model,
@ -130,26 +162,20 @@ class ClientCreator:
scoped_config,
client_config,
endpoint_bridge,
auth_token,
endpoints_ruleset_data,
partition_data,
)
service_client = cls(**client_args)
self._register_retries(service_client)
self._register_eventbridge_events(
service_client, endpoint_bridge, endpoint_url
)
self._register_s3_events(
service_client,
endpoint_bridge,
endpoint_url,
client_config,
scoped_config,
)
self._register_s3_control_events(
service_client,
endpoint_bridge,
endpoint_url,
client_config,
scoped_config,
client=service_client,
endpoint_bridge=None,
endpoint_url=None,
client_config=client_config,
scoped_config=scoped_config,
)
self._register_s3_control_events(client=service_client)
self._register_endpoint_discovery(
service_client, endpoint_url, client_config
)
@ -205,6 +231,11 @@ class ClientCreator:
service_model = ServiceModel(json_model, service_name=service_name)
return service_model
def _load_service_endpoints_ruleset(self, service_name, api_version=None):
return self._loader.load_service_model(
service_name, 'endpoint-rule-set-1', api_version=api_version
)
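
The same documents can be inspected through the session's data loader. A hedged sketch, assuming the bundled botocore data ships both the per-service ruleset and the partitions file:

import botocore.session

loader = botocore.session.get_session().get_component('data_loader')
ruleset = loader.load_service_model('s3', 'endpoint-rule-set-1')
partitions = loader.load_data('partitions')
print(ruleset['version'], len(ruleset['rules']))
print([p['id'] for p in partitions['partitions']])
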
def _register_retries(self, client):
retry_mode = client.meta.config.retries['mode']
if retry_mode == 'standard':
@ -346,17 +377,7 @@ class ClientCreator:
):
if client.meta.service_model.service_name != 's3':
return
S3RegionRedirector(endpoint_bridge, client).register()
S3ArnParamHandler().register(client.meta.events)
use_fips_endpoint = client.meta.config.use_fips_endpoint
S3EndpointSetter(
endpoint_resolver=self._endpoint_resolver,
region=client.meta.region_name,
s3_config=client.meta.config.s3,
endpoint_url=endpoint_url,
partition=client.meta.partition,
use_fips_endpoint=use_fips_endpoint,
).register(client.meta.events)
S3RegionRedirectorv2(None, client).register()
self._set_s3_presign_signature_version(
client.meta, client_config, scoped_config
)
@ -364,23 +385,14 @@ class ClientCreator:
def _register_s3_control_events(
self,
client,
endpoint_bridge,
endpoint_url,
client_config,
scoped_config,
endpoint_bridge=None,
endpoint_url=None,
client_config=None,
scoped_config=None,
):
if client.meta.service_model.service_name != 's3control':
return
use_fips_endpoint = client.meta.config.use_fips_endpoint
S3ControlArnParamHandler().register(client.meta.events)
S3ControlEndpointSetter(
endpoint_resolver=self._endpoint_resolver,
region=client.meta.region_name,
s3_config=client.meta.config.s3,
endpoint_url=endpoint_url,
partition=client.meta.partition,
use_fips_endpoint=use_fips_endpoint,
).register(client.meta.events)
S3ControlArnParamHandlerv2().register(client.meta.events)
def _set_s3_presign_signature_version(
self, client_meta, client_config, scoped_config
@ -429,7 +441,8 @@ class ClientCreator:
"""
Returns the 's3' (sigv2) signer if presigning an s3 request. This is
intended to be used to set the default signature version for the signer
to sigv2.
to sigv2. Situations where an asymmetric signature is required are the
exception, for example MRAP needs v4a.
:type signature_version: str
:param signature_version: The current client signature version.
@ -439,9 +452,12 @@ class ClientCreator:
:return: 's3' if the request is an s3 presign request, None otherwise
"""
if signature_version.startswith('v4a'):
return
for suffix in ['-query', '-presign-post']:
if signature_version.endswith(suffix):
return 's3' + suffix
return f's3{suffix}'
def _get_client_args(
self,
@ -454,6 +470,9 @@ class ClientCreator:
scoped_config,
client_config,
endpoint_bridge,
auth_token,
endpoints_ruleset_data,
partition_data,
):
args_creator = ClientArgsCreator(
self._event_emitter,
@ -473,6 +492,9 @@ class ClientCreator:
scoped_config,
client_config,
endpoint_bridge,
auth_token,
endpoints_ruleset_data,
partition_data,
)
def _create_methods(self, service_model):
@ -545,6 +567,7 @@ class ClientEndpointBridge:
default_endpoint=None,
service_signing_name=None,
config_store=None,
service_signature_version=None,
):
self.service_signing_name = service_signing_name
self.endpoint_resolver = endpoint_resolver
@ -552,6 +575,7 @@ class ClientEndpointBridge:
self.client_config = client_config
self.default_endpoint = default_endpoint or self.DEFAULT_ENDPOINT
self.config_store = config_store
self.service_signature_version = service_signature_version
def resolve(
self, service_name, region_name=None, endpoint_url=None, is_secure=True
@ -592,6 +616,9 @@ class ClientEndpointBridge:
service_name, region_name, endpoint_url, is_secure
)
def resolver_uses_builtin_data(self):
return self.endpoint_resolver.uses_builtin_data
def _check_default_region(self, service_name, region_name):
if region_name is not None:
return region_name
@ -606,10 +633,10 @@ class ClientEndpointBridge:
resolved, region_name, endpoint_url
)
if endpoint_url is None:
# Use the sslCommonName over the hostname for Python 2.6 compat.
hostname = resolved.get('sslCommonName', resolved.get('hostname'))
endpoint_url = self._make_url(
hostname, is_secure, resolved.get('protocols', [])
resolved.get('hostname'),
is_secure,
resolved.get('protocols', []),
)
signature_version = self._resolve_signature_version(
service_name, resolved
@ -765,9 +792,18 @@ class ClientEndpointBridge:
if configured_version is not None:
return configured_version
potential_versions = resolved.get('signatureVersions', [])
if (
self.service_signature_version is not None
and self.service_signature_version
not in _LEGACY_SIGNATURE_VERSIONS
):
# Prefer the service model as most specific
# source of truth for new signature versions.
potential_versions = [self.service_signature_version]
# Pick a signature version from the endpoint metadata if present.
if 'signatureVersions' in resolved:
potential_versions = resolved['signatureVersions']
if service_name == 's3':
return 's3v4'
if 'v4' in potential_versions:
@ -778,7 +814,7 @@ class ClientEndpointBridge:
if known in AUTH_TYPE_MAPS:
return known
raise UnknownSignatureVersionError(
signature_version=resolved.get('signatureVersions')
signature_version=potential_versions
)
@ -804,9 +840,11 @@ class BaseClient:
client_config,
partition,
exceptions_factory,
endpoint_ruleset_resolver=None,
):
self._serializer = serializer
self._endpoint = endpoint
self._ruleset_resolver = endpoint_ruleset_resolver
self._response_parser = response_parser
self._request_signer = request_signer
self._cache = {}
@ -839,6 +877,10 @@ class BaseClient:
f"'{self.__class__.__name__}' object has no attribute '{item}'"
)
def close(self):
"""Closes underlying endpoint connections."""
self._endpoint.close()
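
A hedged usage sketch for the new close() method (service and region are illustrative):

import botocore.session

client = botocore.session.get_session().create_client('sts', region_name='us-east-1')
try:
    pass  # ... make API calls ...
finally:
    client.close()  # releases the endpoint's HTTP connection pool
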
def _register_handlers(self):
# Register the handler required to sign requests.
service_id = self.meta.service_model.service_id.hyphenize()
@ -871,8 +913,15 @@ class BaseClient:
'has_streaming_input': operation_model.has_streaming_input,
'auth_type': operation_model.auth_type,
}
endpoint_url, additional_headers = self._resolve_endpoint_ruleset(
operation_model, api_params, request_context
)
request_dict = self._convert_to_request_dict(
api_params, operation_model, context=request_context
api_params=api_params,
operation_model=operation_model,
endpoint_url=endpoint_url,
context=request_context,
headers=additional_headers,
)
resolve_checksum_context(request_dict, operation_model, api_params)
@ -927,7 +976,13 @@ class BaseClient:
raise
def _convert_to_request_dict(
self, api_params, operation_model, context=None
self,
api_params,
operation_model,
endpoint_url,
context=None,
headers=None,
set_user_agent_header=True,
):
api_params = self._emit_api_params(
api_params, operation_model, context
@ -937,10 +992,16 @@ class BaseClient:
)
if not self._client_config.inject_host_prefix:
request_dict.pop('host_prefix', None)
if headers is not None:
request_dict['headers'].update(headers)
if set_user_agent_header:
user_agent = self._client_config.user_agent
else:
user_agent = None
prepare_request_dict(
request_dict,
endpoint_url=self._endpoint.host,
user_agent=self._client_config.user_agent,
endpoint_url=endpoint_url,
user_agent=user_agent,
context=context,
)
return request_dict
@ -970,6 +1031,56 @@ class BaseClient:
)
return api_params
def _resolve_endpoint_ruleset(
self,
operation_model,
params,
request_context,
ignore_signing_region=False,
):
"""Returns endpoint URL and list of additional headers returned from
EndpointRulesetResolver for the given operation and params. If the
ruleset resolver is not available, for example because the service has
no endpoints ruleset file, the legacy endpoint resolver's value is
returned.
Use ignore_signing_region for generating presigned URLs or any other
situation where the signing region information from the ruleset
resolver should be ignored.
Returns tuple of URL and headers dictionary. Additionally, the
request_context dict is modified in place with any signing information
returned from the ruleset resolver.
"""
if self._ruleset_resolver is None:
endpoint_url = self.meta.endpoint_url
additional_headers = {}
else:
endpoint_info = self._ruleset_resolver.construct_endpoint(
operation_model=operation_model,
call_args=params,
request_context=request_context,
)
endpoint_url = endpoint_info.url
additional_headers = endpoint_info.headers
# If authSchemes is present, overwrite default auth type and
# signing context derived from service model.
auth_schemes = endpoint_info.properties.get('authSchemes')
if auth_schemes is not None:
auth_info = self._ruleset_resolver.auth_schemes_to_signing_ctx(
auth_schemes
)
auth_type, signing_context = auth_info
request_context['auth_type'] = auth_type
if 'region' in signing_context and ignore_signing_region:
del signing_context['region']
if 'signing' in request_context:
request_context['signing'].update(signing_context)
else:
request_context['signing'] = signing_context
return endpoint_url, additional_headers
def get_paginator(self, operation_name):
"""Create a paginator for an operation.

botocore/compat.py

@ -17,6 +17,7 @@ import sys
import inspect
import warnings
import hashlib
from http.client import HTTPMessage
import logging
import shlex
import re
@ -33,9 +34,7 @@ from urllib3 import exceptions
logger = logging.getLogger(__name__)
from botocore.vendored.six.moves import http_client
class HTTPHeaders(http_client.HTTPMessage):
class HTTPHeaders(HTTPMessage):
pass
from urllib.parse import (
@ -307,6 +306,7 @@ except ImportError:
# Vendoring IPv6 validation regex patterns from urllib3
# https://github.com/urllib3/urllib3/blob/7e856c0/src/urllib3/util/url.py
IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}"
IPV4_RE = re.compile("^" + IPV4_PAT + "$")
HEX_PAT = "[0-9A-Fa-f]{1,4}"
LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT)
_subs = {"hex": HEX_PAT, "ls32": LS32_PAT}
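
A quick hedged check of the newly exposed pattern; note it constrains octet length, not value range:

from botocore.compat import IPV4_RE

print(bool(IPV4_RE.match('192.168.0.1')))     # True
print(bool(IPV4_RE.match('not-an-address')))  # False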

botocore/config.py

@ -183,6 +183,12 @@ class Config:
endpoint resolution.
Defaults to None.
:type tcp_keepalive: bool
:param tcp_keepalive: Enables the TCP Keep-Alive socket option used when
creating new connections if set to True.
Defaults to False.
"""
OPTION_DEFAULTS = OrderedDict(
@ -205,6 +211,7 @@ class Config:
('use_dualstack_endpoint', None),
('use_fips_endpoint', None),
('defaults_mode', None),
('tcp_keepalive', None),
]
)
@ -276,11 +283,14 @@ class Config:
)
def _validate_retry_configuration(self, retries):
valid_options = ('max_attempts', 'mode', 'total_max_attempts')
valid_modes = ('legacy', 'standard', 'adaptive')
if retries is not None:
for key, value in retries.items():
if key not in ['max_attempts', 'mode', 'total_max_attempts']:
if key not in valid_options:
raise InvalidRetryConfigurationError(
retry_config_option=key
retry_config_option=key,
valid_options=valid_options,
)
if key == 'max_attempts' and value < 0:
raise InvalidMaxRetryAttemptsError(
@ -292,12 +302,11 @@ class Config:
provided_max_attempts=value,
min_value=1,
)
if key == 'mode' and value not in (
'legacy',
'standard',
'adaptive',
):
raise InvalidRetryModeError(provided_retry_mode=value)
if key == 'mode' and value not in valid_modes:
raise InvalidRetryModeError(
provided_retry_mode=value,
valid_modes=valid_modes,
)
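
A hedged example of what the refactored validation accepts and rejects:

from botocore.config import Config

Config(retries={'max_attempts': 5, 'mode': 'adaptive'})  # passes validation
# Config(retries={'mode': 'exponential'}) is expected to raise
# InvalidRetryModeError, which now reports the valid modes
# ('legacy', 'standard', 'adaptive').
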
def merge(self, other_config):
"""Merges the config object with another config object

botocore/configloader.py

@ -11,13 +11,13 @@
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import configparser
import copy
import os
import shlex
import sys
import botocore.exceptions
from botocore.compat import six
def multi_file_load_config(*filenames):
@ -143,10 +143,10 @@ def raw_config_parse(config_filename, parse_subsections=True):
path = os.path.expanduser(path)
if not os.path.isfile(path):
raise botocore.exceptions.ConfigNotFound(path=_unicode_path(path))
cp = six.moves.configparser.RawConfigParser()
cp = configparser.RawConfigParser()
try:
cp.read([path])
except (six.moves.configparser.Error, UnicodeDecodeError) as e:
except (configparser.Error, UnicodeDecodeError) as e:
raise botocore.exceptions.ConfigParseError(
path=_unicode_path(path), error=e
) from None
@ -253,6 +253,7 @@ def build_profile_map(parsed_ini_config):
"""
parsed_config = copy.deepcopy(parsed_ini_config)
profiles = {}
sso_sessions = {}
final_config = {}
for key, values in parsed_config.items():
if key.startswith("profile"):
@ -262,6 +263,13 @@ def build_profile_map(parsed_ini_config):
continue
if len(parts) == 2:
profiles[parts[1]] = values
elif key.startswith("sso-session"):
try:
parts = shlex.split(key)
except ValueError:
continue
if len(parts) == 2:
sso_sessions[parts[1]] = values
elif key == 'default':
# default section is special and is considered a profile
# name but we don't require you use 'profile "default"'
@ -270,4 +278,5 @@ def build_profile_map(parsed_ini_config):
else:
final_config[key] = values
final_config['profiles'] = profiles
final_config['sso_sessions'] = sso_sessions
return final_config
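
A hedged sketch of the new sso-session handling. Assuming ~/.aws/config contains sections like (names illustrative):

    [profile dev]
    sso_session = my-sso
    sso_account_id = 111122223333
    sso_role_name = ReadOnly

    [sso-session my-sso]
    sso_start_url = https://example.awsapps.com/start
    sso_region = us-east-1

then both maps are available from the parsed result:

from botocore.configloader import load_config

config = load_config('~/.aws/config')
print(config['profiles'].get('dev'))
print(config['sso_sessions'].get('my-sso'))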

botocore/credentials.py

@ -22,7 +22,6 @@ import time
from collections import namedtuple
from copy import deepcopy
from hashlib import sha1
from pathlib import Path
from dateutil.parser import parse
from dateutil.tz import tzlocal, tzutc
@ -43,10 +42,12 @@ from botocore.exceptions import (
UnauthorizedSSOTokenError,
UnknownCredentialError,
)
from botocore.tokens import SSOTokenProvider
from botocore.utils import (
ContainerMetadataFetcher,
FileWebIdentityTokenLoader,
InstanceMetadataFetcher,
JSONFileCache,
SSOTokenLoader,
parse_key_val_file,
resolve_imds_endpoint_mode,
@ -223,6 +224,7 @@ class ProfileProviderBuilder:
profile_name=profile_name,
cache=self._cache,
token_cache=self._sso_token_cache,
token_provider=SSOTokenProvider(self._session),
)
@ -292,68 +294,6 @@ def create_mfa_serial_refresher(actual_refresh):
return _Refresher(actual_refresh)
class JSONFileCache:
"""JSON file cache.
This provides a dict like interface that stores JSON serializable
objects.
The objects are serialized to JSON and stored in a file. These
values can be retrieved at a later time.
"""
CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'boto', 'cache'))
def __init__(self, working_dir=CACHE_DIR, dumps_func=None):
self._working_dir = working_dir
if dumps_func is None:
dumps_func = self._default_dumps
self._dumps = dumps_func
def _default_dumps(self, obj):
return json.dumps(obj, default=_serialize_if_needed)
def __contains__(self, cache_key):
actual_key = self._convert_cache_key(cache_key)
return os.path.isfile(actual_key)
def __getitem__(self, cache_key):
"""Retrieve value from a cache key."""
actual_key = self._convert_cache_key(cache_key)
try:
with open(actual_key) as f:
return json.load(f)
except (OSError, ValueError):
raise KeyError(cache_key)
def __delitem__(self, cache_key):
actual_key = self._convert_cache_key(cache_key)
try:
key_path = Path(actual_key)
key_path.unlink()
except FileNotFoundError:
raise KeyError(cache_key)
def __setitem__(self, cache_key, value):
full_key = self._convert_cache_key(cache_key)
try:
file_content = self._dumps(value)
except (TypeError, ValueError):
raise ValueError(
f"Value cannot be cached, must be "
f"JSON serializable: {value}"
)
if not os.path.isdir(self._working_dir):
os.makedirs(self._working_dir)
with os.fdopen(
os.open(full_key, os.O_WRONLY | os.O_CREAT, 0o600), 'w'
) as f:
f.truncate()
f.write(file_content)
def _convert_cache_key(self, cache_key):
full_path = os.path.join(self._working_dir, cache_key + '.json')
return full_path
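
The removed class has not gone away: it now lives in botocore.utils (see the updated import above). A hedged usage sketch:

from botocore.utils import JSONFileCache

cache = JSONFileCache()                    # defaults to ~/.aws/boto/cache
cache['example-entry'] = {'imported': '2022-12-12'}
assert 'example-entry' in cache
print(cache['example-entry'])
del cache['example-entry']
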
class Credentials:
"""
Holds the credentials needed to authenticate requests.
@ -1105,7 +1045,7 @@ class InstanceMetadataProvider(CredentialProvider):
metadata = fetcher.retrieve_iam_role_credentials()
if not metadata:
return None
logger.debug(
logger.info(
'Found credentials from IAM Role: %s', metadata['role_name']
)
# We manually set the data here, since we already made the request &
@ -2118,6 +2058,8 @@ class SSOCredentialFetcher(CachedCredentialFetcher):
token_loader=None,
cache=None,
expiry_window_seconds=None,
token_provider=None,
sso_session_name=None,
):
self._client_creator = client_creator
self._sso_region = sso_region
@ -2125,6 +2067,8 @@ class SSOCredentialFetcher(CachedCredentialFetcher):
self._account_id = account_id
self._start_url = start_url
self._token_loader = token_loader
self._token_provider = token_provider
self._sso_session_name = sso_session_name